Editor and AI: The Ethics of Using AI

Should an editor not use spellcheck? Should they not use PerfectIt? Of course they should! You’d be rightly incensed if they didn’t use spellcheck!

Those examples are not AI, but AI is just another tool, like those are. The key ethical points about its use are that 1) editors must not simply accept everything these tools suggest, and 2) they need to protect clients’ intellectual property from being fed into the AI.

We can see from the Editor vs. AI tests that accepting every one of a program’s suggested edits usually leads to disaster. English is simply too complex and nuanced for the tools to get it right (yet?). But beyond practicality, there are ethical and legal sides to this question.

AI Is Impractical

At the moment, the available large language model AIs like ChatGPT are more trouble than they’re worth for editing. They don’t understand the principles editors follow, such as “do no harm,” “maintain the author’s voice,” or even “don’t make up facts and citations!” Editing is far more than moving commas and sleuthing typos, but AI isn’t even good at those things yet. At the moment, finessing the prompts to make AI do the work requires a deep understanding of the problem and takes more effort than doing the edits myself. Never mind the errors it introduces, which I then have to ferret out!

I keep saying “at the moment” because change is fast, exponentially fast, and we don’t know where on that exponential curve we are right now. The improvement in grammatical acumen between GPT-3 and GPT-4 was substantial, and the developers were already working on successors when GPT-3 was made public.

AI Uses the Intellectual Property

A key legal objection to using AI in editing is not that it is “cheating” but that we can’t legally “share” our work in progress with others. We have a professional duty to keep it private, and sometimes a contractual duty in the form of a nondisclosure agreement (NDA). Some contracts are so concerned about IP protection that they don’t even let you store files in the cloud (e.g., Dropbox or SharePoint)! That’s why, when you see an editor asking for help in a professional forum, they (at minimum) anonymize any examples.

The issue is that artificial intelligence, by definition, grows as it is used, with no further programming input. (That’s why spellcheck and even PerfectIt don’t meet the definition of AI.) Everything you ask AI about or show it becomes part of its knowledge base. This has led some writers’ organizations to prohibit the use of their members’ writing in training AI (which asking an AI about it would do). Even without an NDA, clients may require that anyone working on their materials not distribute or share them. That includes asking AI about the material or uploading it to a cloud service, such as Grammarly, the Mac version of PerfectIt, or even Dropbox (none of which are AI in the strict sense).

For more about best practices in backup systems and protecting clients’ property (files), see Section 13 of Editing in Word.

There’s even some concern about feeding reference lists into AI, because those become part of its knowledge base, and someone’s research into sources has value. Some content, such as patent applications and other corporate intellectual property, could be inadvertently leaked to competitors this way: someone asks AI for help with project A. Later, someone working on project B asks AI for help too, and the AI draws on its past experience, in effect reasoning, “I have seen material on A, and I will show you what it said, because I learned from my interaction with user A and their questions are now part of my model.”

This relates to the legal objection that authors have raised against OpenAI, the maker of ChatGPT, because the company used their copyrighted works to train the AI (to populate the large language model). This constant automated learning, without ongoing direct programmer intervention, is what distinguishes AI from other software like spellcheck. The organizations prohibiting the use of AI are saying not only that their content must not be scraped by AI programs but also that workers can’t allow access to the content they’re working on by using AI tools.

If you’d like to learn more about AI and the various functional and legal concerns, Grammar Girl has a newsletter that will give you a grounding in it relative to publishing concerns. If you’re more technically minded, you might find value in “The AI Dilemma” episode of the Your Undivided Attention podcast (from AI’s early days). You can also hear me interviewed in the early days of ChatGPT on Melanie Padgett Powers’ Deliberate Freelancer podcast.

AI Can’t Be Held Accountable

The second legal objection to using AI is that it must (ethically) be credited but cannot be cited as an author, since it can’t be held accountable for its output. Several standards organizations and publishers have officially rejected AI authorship (see the list at the end).

Create Your Policy

As publishers and other organizations (corporations, NGOs, and the rest) update their contracts to address the use of AI by their contractors, editors would be wise to consider their own policies and perhaps even add such stipulations to their client contracts. Clients may see value in your guarantee that you will not upload their content to any AI system! Like the policies rounded up below, your stance should reassure clients that their IP will be protected while still allowing you to use spellcheck and all the macros and other non-AI tools that make your work more effective and efficient.

Roundup of Publishers’ & Writers’ Groups’ AI Policies

More policies will be added as they become available.

(Thanks to Melisa Bogen for rounding up six of these. Also see Erin Servais’ growing resource.)
