Editing students have started asking if there’s even a point in pursuing this career, now that AI is on the scene. Yes! And maybe, one day, the AI will even make our job easier and more consistent. But as we’ve seen in this series already, AI is not ready to take over editing yet. Not by a long shot. Does AI present opportunities? Hopefully!
This reminds me of when I started editing, back in the late 90s: editing was moving away from paper hard copy and onto computer screens. I got a foot in the door of publishing because I was excited about computers at a time when a great many publishing veterans were choosing retirement over learning this particular new trick. Some veterans stayed and thrived with computers, of course. But the exodus opened a gap for me to fill.
While I don’t have a crystal ball, I do think we both 1) have some time before AI takes over our jobs, and 2) have an opportunity to be on the leading edge of harnessing these tools. Mind you, as Helen King enumerated in her chat with Scholarly Kitchen (excerpted below), AI is not new; ChatGPT just caught the public’s attention.
At the moment, AI is making editing jobs secure. Proofreading ones, too! As long as we can show people how it fails at everything beyond detecting the most glaring typos, humans will be needed. (See the posts in this Editor Vs AI series for examples!) But given the speed of advancement, I don’t know whether it will take decades or months for GPT and other AI to start doing a good enough job to replace human QC measures. It hasn’t been a speedy takeover to date! Remember how Clippy was supposed to revolutionize our work? Microsoft’s deep investment in OpenAI might be the biggest threat to GPT’s development yet. (Remember Skype? No, no one does. Microsoft bought it, and it seems to have faded into the background.)
AI isn’t new. As this webinar summary reports, some AI tools have already been put to use to enhance peer review tasks. Maybe we just didn’t recognize them as AI?
“Helen King: It’s hard to think of an area where AI isn’t touching the publishing workflow! If I go through the full scientific publishing flow, there are AI tools to support article writing (such as PaperPal and Writefull), article submission (such as Wiley’s Rex, which automatically extracts data from your paper); tools to screen manuscripts on submission like Penelope and RipetaReview; and tools to support peer review such as SciScore for method checking, and Proofig and ImageTwin for scientific image checking, though statistics checking is not yet quite so developed. There are many AI-based tools and services to support finding reviewers. Interesting work is being done by Scite.ai around citation analysis to show how citations support the arguments in the paper. At the production stage there are lots of tools to create proofs, especially in book publishing, and many publishers are using automated triaging or copyediting services. Post-publication there are search engines and recommender tools that use AI to categorize content, and help with the ‘marketing’ of papers by using information about what you have read to suggest ‘what should I look at next?’” —Helen King, Head of Transformation at SAGE Publishers, who hosts the influential blog PubTech Radar
“Paul Groth: That’s a great list! Next to those I see two main areas: first, summarization: e.g., by Scholarcy, which is right here, right now!” —Paul Groth, Professor of Algorithmic Data Science at the University of Amsterdam and scientific director of the UvA Data Science Center. He previously worked as Disruptive Technologies Director at Elsevier and is a former Board member of Force11.
Anita Ward, “AI and Scholarly Publishing: A View from Three Experts.” The Scholarly Kitchen, 18 Jan. 2023
The Scholarly Kitchen post includes some great discussion of what qualifies as AI, too.
Some publications might turn to AI editors for cost savings, regardless of quality. Some will be happy with that tradeoff, but others won’t: they will demand the rigour and quality of a human editor. My colleague Hazel Bird compared this to offering a bespoke service, with an appeal comparable (in my words) to buying from an artist instead of Ikea.
“…despite the ubiquity of ready-made, off-the-shelf furniture, the bespoke market is continuing to grow at a healthy rate… . We can buy a chair in a fairly generic design, made in a factory with (hundreds of) thousands of other items. But, if our finances allow, many of us will instead choose something more personalised, made by identifiable human craftspeople from a carefully thought through design, using materials we choose within a company whose ethos we support.”— Hazel Bird, “Why AI won’t replace human editors – and AI agrees“
Creating Even More Work
Predictive-text and translation AIs are getting more and more useful; we encounter them on our phones and in Gmail’s suggested replies, for example. AI has come a long way from Clippy. (If that’s a mystery to you, ask an elder. Their story will be entertaining!) In its current state, though, AI has the potential to create MORE editing work, especially in the areas of flow, internal consistency and logic, and fact checking. Our translator colleagues will assure us that machine translation is doing an absolute toddler job of it, despite the utility of the in situ translation my phone will do to help me communicate with the person in front of me! Translation apps are creating far more work than ever: cleaning their output up to a usable standard or, more often, redoing the translation entirely, especially where content has legal and safety implications.
Where Humans Rule
While AI struggles to detect deeper grammatical errors such as dangling modifiers, it’s downright blind to logic and flow. Then there’s its inability to detect plagiarism (which a future post will investigate), to flag bias and legal exposure, or to fact-check. (See the Professional Editorial Standards for an enumeration of all the editing concerns beyond grammar.) GPT sounds authoritative when it writes (or edits), but it makes things up and commits considerable errors of general knowledge, leading some to compare its output to mansplaining.
Bias is a serious concern with AI as well. These tools can only build on what they’ve been fed (trained on) and, as Rhonda Bracey explains, we don’t know what that was, but we might assume it was largely white, middle-class, male content, or perhaps WEIRD: Western, educated, industrialized, rich, and democratic. This plays out in some predictable ways. When I asked the art AI to create an image of an out-of-work editor picketing about AI taking their jobs, all the images it produced were of very dark-skinned people. I switched the request to an android to avoid this prejudice; still, the androids it offered were all white.
Create Job Security by Embracing All Tools
Just as editors added value to their offering when they learned how to use on-screen markup and word processors, so too do we have the opportunity to enhance our offering with AI tools. Knowing when they’re useful and when they fail us is an excellent first step.
At the moment, chatbots need users to ask specific questions. As the articles I’ve linked to show, it’s not as simple as saying “edit this”; the bot won’t do it. And asking all of the specific questions necessary to edit a text, then evaluating which results are useful and which are harmful, will take a human quite a bit of time. The human-brain supercomputer trained to edit is likely to do these processes faster without querying a GPT. (See the usage note at the end.)
Using AI as a colleague to bounce ideas off can be useful, as Rhonda has been finding in her experiments. It might give productive suggestions for prompts like “rewrite this as a 3rd-grader” or “summarize this,” but given all the factual errors it makes, its output will still need your professional discretion.
Using AI to enhance our work helps the bottom line as well as the product. For now, however, I don’t think advertising “I edit using AI” will be a selling feature to anyone who doesn’t just want to use the AI themselves; its output is still shoddy. Editors aren’t quite reduced to “terminal operators” yet!
GPT stands for Generative Pre-trained Transformer; ChatGPT is just the interface that is popular at the moment. You might think GPT stands for “generated predictive text,” but the “pre-trained” and “transformer” parts of its name are key.
Thanks to the many people who forwarded articles to me about this topic, including Katharine O’Moore-Klopf, Erin Brenner, Rhonda Bracey, Josh Cowan, and my family… I like being known for this much more than I liked being known for concussion recovery.
See also an older post about AI in scholarly publishing, from LetPub.
The image for this post was created by giving the DALL·E 2 image generator the prompt “android picketing outside a newspaper office holding a blank sign” (it produces a gibberish sign if not told to make the sign blank), with words added by me.