Dear Commons Community,
Violeta Berdejo-Espinola and Tatsuya Amano have a letter to the editor in Science this morning questioning the journal's recent editorial banning the use of AI-generated text in scientific articles. Here is the Berdejo-Espinola and Amano letter in its entirety.
“In his Editorial “ChatGPT is fun, but not an author” (27 January, p. 313), Editor-in-Chief H. H. Thorp describes Science’s position on using artificial intelligence (AI) in scientific papers. The updated policy essentially bans the use of text generated from AI, machine learning, or similar algorithmic tools in articles. However, Thorp overlooks the potential of AI tools to improve equity in science by alleviating linguistic disparities.
Research has shown that nonnative English speakers need to invest much more effort than native English speakers when writing papers in English (1). Journals are more likely to reject or request revisions before acceptance of papers written by nonnative English speakers (2, 3). Human English translation and editing services are costly and time-consuming (4), creating a profound disadvantage for the career development and fair participation of nonnative English speakers in science.
Emerging AI tools, such as ChatGPT and DeepL, can proofread English text with high accuracy (5, 6). The availability of quality, free (or affordable) English editing presents an opportunity for nonnative English speakers, especially those in low-income countries, who often cannot afford to use human English editing services (1, 4). Reducing the technical and financial burden of editing and proofreading papers for nonnative English speakers would be a substantial step toward achieving equity in science.
Our relationship with AI should be a partnership, not a competition. Journal policies should allow authors to use AI tools to edit and proofread their manuscripts. Journal editors can ensure that humans wrote the original text by using the detection tools available [e.g., (7)]. In addition, they can request that authors declare the use of AI tools, as Nature does (8), or submit the original version as well as the AI-edited version of the manuscript for full transparency. Regardless of whether they use AI tools, authors will always be responsible for the language used and the content in their final text.”
I agree with the two authors on this. I think we need to find ways to partner with AI, not ban it. Presently, I have twelve graduate students evaluating the use of ChatGPT for writing papers. One of these students made the same point as the letter above about the software's benefits for nonnative English speakers.
1. T. Amano et al., EcoEvoRxiv 10.32942/X29G6H (2022).
2. S. Politzer-Ahles, T. Girolamo, S. Ghali, J. Engl. Acad. Purp. 47, 100895 (2020).
3. A. L. Romero-Olivares, Science 10.1126/science.caredit.aaz7179 (2020).
4. V. Ramírez-Castañeda, PLOS ONE 15, e0238372 (2020).
5. A. Katsnelson, Nature 609, 208 (2022).
6. S. Huh, Sci. Edit. 10, 1 (2023).
7. J. H. Kirchner, L. Ahmad, S. Aaronson, J. Leike, "New AI classifier for indicating AI-written text," OpenAI (2023); https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text/.
8. Nature 613, 612 (2023).