[updated May 15, 2024]
Should an editor not use spellcheck? Should they not use PerfectIt? Of course they should! You’d be rightly incensed if they didn’t use spellcheck!
Those examples are not AI, but AI is just another tool, as those are tools. The key points for ethical use are that 1) editors should not merely run these tools and accept whatever they suggest, and 2) they need to protect clients’ intellectual property from being absorbed into the AI.
Ethical Standards Roundup
Check out the full roundup of ethical standards for editors around the world. Standards relating to the use of AI (and any technology) include:
EC: A11. As an editor, you should be aware of the various issues and options for language usage and know the available editorial resources and how to use them. Editing resources include dictionaries, manuals, databases, software applications, style guides and other reference materials that are often used in the trade. This list is not exhaustive, but it is intended to give you an understanding of what resources are commonly used to complete editorial work.
EC: A11.2 Use current technology, software and systems for working with and sharing materials with authors, clients and team members.
EC: A11.3 Maintain competency in software and software features relevant to editing (e.g., finding and replacing items, marking revisions and checking consistency).
IPEd: A4 Legal and ethical matters — …alert the publisher at the earliest opportunity to any possible legal problems…
IPEd: A6.1 Common word-processing software for editing. Includes use and development of templates, styles, revision mark-up (comments, track changes), tables of contents, footnotes and endnotes, macros, find and replace functions, and spellcheck.
IPEd: 3.1.2 Continuing professional development (CPD) — …maintain, improve and update their skills and knowledge, especially where new technology creates changes in publishing practice. …
CIEP: 3.1.6 Responsibility to clients — …making the best use of the time available …to the required standard within …schedule….
CIEP: 3.1.9 Original material and records — …ensure the safe keeping of documents
CIEP: 3.1.12 Subcontracting — …not subcontract work to others without the knowledge and consent of the client. …remain responsible for the terms …and …quality ….
CIEP: 3.4.4 Relating to documents — …ensure the safe keeping and subsequent disposal or return of confidential documents….
We can see from the Editor v. AI tests that accepting every one of a program’s suggested edits usually leads to disaster. English is simply too complex and nuanced for the tools to get it right. [Yet?] Beyond the practical problems, there are ethical and legal sides to this question.
AI Is Impractical
At the moment, the available large language model AIs like ChatGPT are more trouble than they’re worth for editing. They don’t understand the principles editors follow, such as “do no harm” and “maintain the author’s voice”, or even “don’t make up facts and citations!” Editing is far more than moving commas and sleuthing typos, but AI isn’t even good at those things yet. At the moment, finessing the queries to make AI do the work requires a high degree of understanding of the problem and takes more effort than doing the edits myself. Never mind the errors it adds, which I then have to ferret out!
I keep saying “at the moment” because change is fast: exponentially fast. And we don’t know where on that exponential curve we are right now. The change in grammatical acumen between ChatGPT-3 and ChatGPT-4 was substantial, and the developers were already working on ChatGPT-4 when 3 was made public.
AI Uses the Intellectual Property
A key legal objection to using AI in editing is not that it is “cheating”, but that we can’t legally “share” our work in progress with others. We have a professional duty to keep it private, sometimes a contractual duty in the form of a “nondisclosure agreement” (NDA). Some contracts don’t even let you store files in the cloud (e.g., Dropbox or SharePoint), they’re so concerned about IP protections! That’s why, when you see an editor asking for help in a professional forum, they (at minimum) anonymize any examples.
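If you’re curious what that anonymization can look like mechanically, here’s a minimal sketch in Python. All the names and phrases in it are invented for illustration, and most editors would do this pass by hand rather than with a script:

```python
import re

# A minimal sketch of anonymizing an excerpt before sharing it publicly.
# Every name and phrase below is invented for illustration; in practice an
# editor would build (and double-check) this list by hand for each excerpt.
replacements = {
    r"\bDr\. Jane Smith\b": "[Author]",
    r"\bAcme Pharmaceuticals\b": "[Company]",
    r"\bProject Nightingale\b": "[Project]",
}

def anonymize(text: str) -> str:
    """Swap each identifying phrase for a neutral placeholder."""
    for pattern, placeholder in replacements.items():
        text = re.sub(pattern, placeholder, text)
    return text

excerpt = ("Dr. Jane Smith argues that the results from Project Nightingale "
           "at Acme Pharmaceuticals was inconclusive.")
print(anonymize(excerpt))
# [Author] argues that the results from [Project] at [Company] was inconclusive.
```

Note that the grammatical question (should “was” be “were”?) survives intact; the identifying details never leave your machine.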
The issue is that artificial intelligence, by definition, grows as it is used, with no further programming input. (That’s why spellcheck and even PerfectIt don’t meet the definition of AI.) Everything you ask an AI about or show it can become part of its knowledge base. This has led some writers’ organizations to prohibit the use of their members’ writing in training AI (which even asking an AI about that writing would do). Even without an NDA, clients may require that anyone working on their materials not distribute or share them. That includes asking AI about the materials or uploading them to a cloud service, such as Grammarly, the Mac version of PerfectIt, or even Dropbox (none of which are AI in the strict sense).
There’s even some concern about feeding reference lists into AI, because those become part of its knowledge base, and someone’s research into sources has value. Some content, such as patent applications and other corporate intellectual property, could be inadvertently leaked to competitors this way: Someone asks AI for help with project A. Then someone working on project B asks AI for help too, and the AI dredges up its past experience and processes something like “I have experience with A, and I will show you what it said, because I learned from my interaction with user A and their questions are now part of my model.”
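To make that leakage scenario concrete, here’s a toy sketch in Python. It is emphatically not how a real LLM works (training and data retention are far more complex, and everything here, from the ToyAssistant class to the user names, is invented), but it shows how a system that folds every query into a shared knowledge base could surface one client’s material to another user:

```python
# Toy model of a shared "knowledge base" that grows with every query.
# This illustrates the leakage concern only; it is not an actual AI.

class ToyAssistant:
    def __init__(self):
        self.knowledge = []  # everything every user has ever submitted

    def ask(self, user: str, text: str) -> str:
        # Naively check whether earlier users' material relates to this query.
        related = [(who, old) for who, old in self.knowledge
                   if who != user and any(w in old for w in text.split())]
        self.knowledge.append((user, text))  # the query itself is retained
        if related:
            who, old = related[0]
            return f"I've seen related material from {who}: {old!r}"
        return "No related material yet."

bot = ToyAssistant()
bot.ask("editor_A", "Please tighten this patent application for Widget X.")
# Later, a different user working for a competitor asks about the same topic:
print(bot.ask("rival_B", "What do you know about Widget X?"))
# I've seen related material from editor_A: 'Please tighten this patent
# application for Widget X.'
```

Whether a given real-world AI retains your inputs this way depends on the provider’s policies, but the professional obligation is the same either way: don’t put client material into the pipe.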
This relates to the legal claims authors have brought against OpenAI, the maker of ChatGPT, for using their copyrighted works to train the AI (to populate the large language model). This constant automated learning without ongoing direct programmer intervention is what distinguishes AI from other software like spellcheck. The organizations prohibiting the use of AI are saying not only that their content must not be scraped by the programs/AI but also that workers must not expose the content they’re working on by using AI tools.
If you’d like to learn more about AI and the various functional and legal concerns, Grammar Girl has a newsletter that will give you a grounding in the issues as they relate to publishing. If you’re more technically minded, you might find value in “The AI Dilemma” episode of the Your Undivided Attention podcast (from the early days). You can also hear me interviewed in the early days of ChatGPT on Melanie Padgett Powers’ Deliberate Freelancer podcast.
AI Can’t Be Held Accountable
The second legal objection to using AI is that it must be credited (ethically) but cannot be cited as an author, since it can’t be held accountable for its output. Several standards organizations and publishers have officially rejected or restricted the use of AI (see the roundup at the end).
Create Your Policy
As publishers, corporations, NGOs, and other organizations update their contracts to address their contractors’ use of AI, editors would be wise to consider their own policies and perhaps even add such stipulations to the contracts they use with clients. Clients may see value in your guarantee that you will not upload their content to any AI system! Like the policies rounded up below, you’ll want your stance to reassure clients that their IP will be protected while also allowing you to keep using spellcheck and all the macros and other non-AI tools that make your work more effective and efficient.
Roundup of Publishers' & Writers' Groups' AI Policies
- Science [journal] prohibits AI use in papers (not in research)
- AMWA “Leveraging Artificial Intelligence, Natural Language Processing, and Natural Language Generation in Medical Writing”
- Nature allows documented use of AI in writing (but doesn’t allow it to be “an author”) and prohibits its use for images: ground rules for AI use, authorship policy
- Nature Reviews Urology “Artificial intelligence in academic writing: a paradigm-shifting technological advance”
- American Chemical Society “Best Practices for Using AI When Writing Scientific Manuscripts” in ACS Nano
- Elsevier “The use of AI and AI-assisted writing technologies in scientific writing” and “Generating scholarly content with ChatGPT: ethical challenges for medical publishing” in Lancet Digital Health
- Taylor & Francis “Defining authorship in your research paper”
- JAMA discourages the use of AI in its “Instructions to Authors” and says it can’t be a co-author; also see “Nonhuman ‘Authors’ and Implications for the Integrity of Scientific Publication and Medical Knowledge” in JAMA Network
- WAME (and British Medical Journal) “Chatbots, ChatGPT, and Scholarly Manuscripts”
- ICMJE recommendations: “Defining the Role of Authors and Contributors”
More policies will be added as they become available.
(Thanks to Melissa Bogen for rounding up six of these. Also see Erin Servais’ dictionary of AI.)