A new study led by Dr. Martin Májovský and collaborators has shown that artificial intelligence (AI) language models such as ChatGPT (Chat Generative Pre-trained Transformer) can generate fraudulent scientific articles that appear remarkably authentic.
The study was published in the Journal of Medical Internet Research on May 31st, 2023.
This finding raises serious concerns about the integrity of scientific research and the reliability of published papers.
Scientists from Charles University in the Czech Republic set out to examine how well current AI language models can produce convincing fraudulent medical articles.
The team used the well-known AI chatbot ChatGPT, which runs on the GPT-3 language model developed by OpenAI, to generate an entirely fabricated scientific article in the field of neurosurgery. Questions and prompts were refined as ChatGPT generated its responses, allowing the quality of the output to be improved iteratively.
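The paper does not publish the team's exact prompts or tooling, but the iterative loop it describes can be illustrated in rough outline. The sketch below is purely illustrative, not the authors' method: it assumes OpenAI's current Python SDK (v1.x), a stand-in model name, and it reduces the human review step to a fixed follow-up instruction.

```python
# Minimal sketch of iterative prompt refinement; NOT the study's actual code.
# Assumes the OpenAI Python SDK (v1.x) with OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def generate(prompt: str) -> str:
    """Send a single prompt to the chat model and return its text reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in; the study used the GPT-3-based ChatGPT
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Begin with a broad request, then sharpen the prompt based on each draft,
# mirroring the human-in-the-loop refinement described in the study.
prompt = "Draft the abstract of a neurosurgery study on a fictitious topic."
draft = ""
for _ in range(3):  # a few refinement rounds
    draft = generate(prompt)
    # In the study a human reviewed each draft; here the 'review' is a
    # fixed instruction asking for a more journal-like register.
    prompt = ("Revise the following abstract to match the structure and tone "
              "of a peer-reviewed journal article:\n\n" + draft)
print(draft)
```

In the actual study, of course, the refinement judgments were made by a human reader rather than a canned follow-up instruction.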
The outcome of this proof-of-concept study was remarkable: the AI language model produced a fraudulent article that closely mirrored a genuine scientific paper in word usage, sentence structure, and overall composition.
The article contained the standard sections, including an abstract, introduction, methods, results, and discussion, as well as tables and other data. Astonishingly, the complete process of article creation took just one hour and required no special training of the human user.
Although the AI-generated article appeared sophisticated and flawless on the surface, expert readers were able to identify semantic errors and inaccuracies on closer examination, particularly in the references: some references were incorrect, while others did not exist at all.
This highlights the need for increased vigilance and improved detection methods to combat the potential misuse of AI in scientific research.
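One concrete line of defense the fabricated references point toward is automated citation checking. As a hedged illustration (not a method proposed in the study), every DOI cited in a manuscript could be tested against Crossref's public REST API, which returns metadata only for registered DOIs:

```python
# Illustrative sketch: flag cited DOIs that Crossref cannot resolve.
# Not from the study; assumes the `requests` library and Crossref's
# public REST API at https://api.crossref.org.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref holds a record for the given DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

# Hypothetical DOIs extracted from a manuscript's reference list.
cited_dois = ["10.2196/46924", "10.9999/this-doi-does-not-exist"]
for doi in cited_dois:
    verdict = "found" if doi_exists(doi) else "NOT FOUND: possibly fabricated"
    print(f"{doi}: {verdict}")
```

A check like this would catch nonexistent references outright, though incorrect-but-real citations would still require human review.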
The study's findings stress the importance of developing ethical guidelines and best practices for the use of AI language models in genuine scientific writing and research. Applied appropriately, models like ChatGPT can improve the efficiency and accuracy of document creation, result analysis, and language editing.
By using such tools carefully and responsibly, scientists can harness their power while reducing the risk of misuse or abuse.
In a commentary on Dr. Májovský’s article, Dr. Pedro Ballester discusses the need to prioritize the visibility and reproducibility of scientific works, as these act as essential safeguards against the spread of fraudulent research.
As AI continues to advance, it will be vital for the scientific community to verify the accuracy and authenticity of content produced by such tools and to put processes in place for detecting and preventing fraud and misconduct.
While both articles agree that a better way of verifying the accuracy and authenticity of AI-generated content is needed, how this could be achieved remains unclear.
“We should at least declare the extent to which AI has assisted the writing and analysis of a paper,” suggests Dr. Ballester as a starting point.
Another possible solution, suggested by Májovský and collaborators, is to make the submission of data sets mandatory.
Journal Reference
Májovský, M., et al. (2023). Artificial Intelligence Can Generate Fraudulent but Authentic-Looking Scientific Medical Articles: Pandora’s Box Has Been Opened. Journal of Medical Internet Research. https://doi.org/10.2196/46924