The artificial intelligence (AI) chatbot ChatGPT, which has taken the world by storm, has made its formal debut in the scientific literature, racking up at least four authorship credits on preprints and published articles.
Journal editors, researchers, and publishers are now debating the place of such AI tools in the published literature, and whether it is appropriate to credit the bot as an author. Publishers are racing to create policies for the chatbot, which was released as a free-to-use tool in November by OpenAI, a software company based in San Francisco, California.
ChatGPT is a large language model (LLM), which generates convincing sentences by mimicking the statistical patterns of language in a huge body of text collated from the Internet. The bot is already disrupting sectors including academia; in particular, it is raising questions about the future of research papers and academic writing.
Publishers and preprint servers agree that AIs such as ChatGPT do not fulfil the criteria for study authorship, because they cannot take responsibility for the content and integrity of scientific papers. But some publishers say that an AI's contribution to writing a paper can be acknowledged in sections other than the author list.
In one case, an editor told Nature that ChatGPT had been listed as a co-author in error, and that the journal would correct this.
ChatGPT is one of 12 authors on a preprint about using the tool for medical education, posted on the medical repository medRxiv in December last year.
The team behind the repository and its sister site, bioRxiv, is discussing whether it is appropriate to use and credit AI tools such as ChatGPT when writing papers, says co-founder Richard Sever, assistant director of Cold Spring Harbor Laboratory Press in New York. Conventions might change, he adds.
The formal authorship of a scholarly manuscript needs to be distinguished from the broader notion of an author as the writer of a document, Sever says. Only people should be listed, he argues, because authors take on legal responsibility for their work. Of course, people may try to sneak an AI in regardless (this has already happened at medRxiv), much as people have in the past listed pets, fictional characters and the like as authors on journal articles. But that is a checking issue rather than a policy issue.
ChatGPT is listed as a co-author, alongside Siobhan O'Connor, a health-technology researcher at the University of Manchester, UK, of an editorial published this month in the journal Nurse Education in Practice. Roger Watson, the journal's editor-in-chief, says the credit slipped through in error and will soon be corrected. It was an oversight on their part, he says, because editorials go through a different management system from research papers.
And ChatGPT was credited as a co-author of a perspective article in the journal Oncoscience last month, says Alex Zhavoronkov, chief executive of Insilico Medicine, an AI-powered drug-discovery company based in Hong Kong. He says that his company has published more than 80 papers produced with generative AI tools, so it has experience in the field. The latest article discusses the pros and cons of taking the drug rapamycin, in the context of a philosophical argument known as Pascal's wager. ChatGPT wrote a much better article than previous generations of generative AI tools had, Zhavoronkov says.
He says that he asked the editor of Oncoscience to peer review this paper. The journal did not respond to Nature's request for comment.
A fourth article, co-written by an earlier chatbot called GPT-3 and posted on the French preprint server HAL in June 2022, will soon be published in a peer-reviewed journal, says co-author Almira Osmanovic Thunström, a neurobiologist at Sahlgrenska University Hospital in Gothenburg, Sweden. One journal rejected the paper after review, she says, but a second accepted it with GPT-3 as an author after she rewrote the article in response to reviewer requests.
The editors-in-chief of Nature and Science told Nature's news team that ChatGPT does not meet the standard for authorship. An attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs, says Magdalena Skipper, editor-in-chief of Nature in London. Authors who use LLMs in any way while preparing a manuscript should document that use in the methods or acknowledgements sections, where appropriate, she says.
Holden Thorp, editor-in-chief of the Science family of journals in Washington DC, says the journals would not allow AI to be listed as an author on a paper they publish, and that use of AI-generated text without proper citation could be considered plagiarism.
The publisher Taylor & Francis is currently reviewing its policy, says Sabina Alam, head of publishing ethics and integrity at Taylor & Francis in London. She agrees that authors are responsible for the validity and integrity of their work, and says any use of LLMs should be acknowledged. Taylor & Francis has not yet received any submissions that credit ChatGPT as a co-author.
The board of the physical-sciences preprint server arXiv has had internal discussions and is beginning to converge on an approach to the use of generative AIs, says scientific director Steinn Sigurdsson, an astronomer at Pennsylvania State University in University Park. He agrees that a software tool cannot be an author of a submission, in part because it cannot consent to the terms of use and the right to distribute content. Sigurdsson says he is not aware of any arXiv preprints that list ChatGPT as a co-author, and adds that guidance for authors is on its way.
There are already clear authorship guidelines that mean ChatGPT should not be credited as a co-author, says Matt Hodgkinson, a research-integrity manager at the UK Research Integrity Office in London, speaking in a personal capacity. One guideline is that a co-author needs to make a significant scholarly contribution to the article, which might be possible with tools such as ChatGPT, he says. But a co-author must also be able to agree to be a co-author and to take responsibility for a study, or at least the part they contributed to. It is on this second requirement that the idea of giving an AI tool co-authorship really hits a roadblock, he says.
Zhavoronkov says that when he tried to get ChatGPT to write papers more technical than the perspective it published, it failed. It does very often return statements that are not necessarily true, he says, and if you ask it the same question several times, it will give you different answers. He says he is worried about misuse of the system in academia, because people without relevant expertise could now attempt to write scientific papers.