Should an editor not use spellcheck? Should they not use PerfectIt? Of course they should! You’d be rightly incensed if they didn’t use spellcheck!
Those examples aren’t AI, but AI is just another tool, like those are. The key is that editors shouldn’t simply run these tools and accept every one of their suggestions; that way disaster lies. English is simply too complex and nuanced for the tools to get it right. [Yet?] Beyond that, there are many sides to the ethics question.
AI Is Impractical
At the moment, the available large language model AIs like ChatGPT are more trouble than they’re worth for editing. They don’t understand the principles editors follow, such as “do no harm,” “maintain the author’s voice,” or even “don’t make up facts and citations!” Editing is far more than moving commas and sleuthing typos, but AI isn’t even good at those things yet. At the moment, finessing the prompts to make AI do the work takes more effort than doing the edits myself, and it inserts more errors I have to ferret out!
I keep saying “at the moment” because change is fast; exponentially fast. And we don’t know where on that exponential curve we are right now. The jump in grammatical acumen between GPT-3 and GPT-4 was substantial, and the developers are already working on the next versions.
AI Uses Others’ Intellectual Property
Several standards organizations and publishers have officially rejected AI being credited as an author (see below), and some writers’ organizations have prohibited the use of their members’ writing in training AI. That means not only that their content must not be scraped by the programs but also that editors can’t expose that content by using the tools, since the AI learns from every interaction. That’s how it differs from other programs: it’s constantly learning, without ongoing direct programmer intervention. Beyond that concern, feeding files into AI is likely to violate security guarantees the editor has made in their contracts, as well as other intellectual property protections such as non-disclosure agreements (NDAs).
Create Your Policy
As publishers and other corporations (and NGOs and all organizations) update their contracts to address the use of such AI by their contractors, editors would be wise to consider their own policies and perhaps even add such stipulations to the contracts they use with clients. As with the policies rounded up below, you’ll want your stance to reassure clients that their IP will be protected while still allowing you to keep using spellcheck and all the macros and other tools that make you more effective and efficient.
Roundup of Publishers’ & Writers’ Groups’ AI Policies
- Science [journal] prohibits AI use in papers (not in research)
- AMWA “Leveraging Artificial Intelligence, Natural Language Processing, and Natural Language Generation in Medical Writing”
- Nature allows using AI for writing when documented (but doesn’t allow it to be “an author”) but prohibits use for images: ground rules for AI use, authorship policy
- Nature Reviews Urology “Artificial intelligence in academic writing: a paradigm-shifting technological advance”
- American Chemical Society “Best Practices for Using AI When Writing Scientific Manuscripts” in Nano
- Elsevier “The use of AI and AI-assisted writing technologies in scientific writing” and “Generating scholarly content with ChatGPT: ethical challenges for medical publishing” in Lancet Digital Health
- Taylor & Francis “Defining authorship in your research paper”
- JAMA discourages the use of AI in its “Instructions to Authors” and says it can’t be a co-author; also see in Network “Nonhuman ‘Authors’ and Implications for the Integrity of Scientific Publication and Medical Knowledge”
- WAME (and British Medical Journal) “Chatbots, ChatGPT, and Scholarly Manuscripts”
- ICMJE recommendations: “Defining the Role of Authors and Contributors”
More policies will be added as they become available.
(Thanks to Melisa Bogen for rounding up six of these.)