The Right Tool for the Right Job: Understanding Where AI Helps (and Where It Can’t)

Tom Waddell, Chief Engineer, PerfectIt

When it comes to PerfectIt, I usually make my contributions in the code that comes to life when you run the product. But not long ago, I found myself at an editors’ conference, swapping code for conversations. And I was struck by a common worry: the whirlwind of new AI technologies has made choosing editing tools confusing and daunting. Many of the people I spoke to seemed lost in a maze of tech talk and big promises, afraid of wasting time and money on the wrong things or of compromising security. But they also feared missing out, knowing that the right tools can make their work faster and more efficient.

The speed of today’s technological change may be unprecedented, but the essence of adaptation remains the same as it has throughout history: pick the best tool for each specific task and overall outcomes improve. I decided to write this article to share a practical framework I use when I’m designing tools. It helps editors make sense of AI and automation options, so that choosing editing tools becomes less of a headache and more of a clear path to doing great work.

From Pins to Papers: The Journey of Specialization

Specialization isn’t a new trick; it’s a productivity technique that’s been used for hundreds of years. Adam Smith highlighted it in 1776 in ‘The Wealth of Nations’, using a pin factory as his example. In the factory, each worker focused on a small, specific task.

"One man draws out the wire, another straights it, a third cuts it, a fourth points it, a fifth grinds it at the top for receiving the head."

Ford’s assembly line follows the same principle. Document production is no different. Take the production of a book: there’s a part for the writer, another for the editor, and another for the publisher. Or we can zoom in further and break the work that, for example, a proposal writer does into specific tasks such as researching the subject, speaking to subject-matter experts, compiling a first draft, honing that draft, and editing the text.

Putting Each Task into a Box

Whether you spend time thinking about workflows or not, we’re all used to the idea of breaking down tasks into smaller, more manageable ones and giving them labels. Any process or workflow is just a series of steps. These steps can be assigned to people, machines, or a combination of both. What matters is how each stage is divided and the results it produces.

The Person in the Box

Figuring out if a machine is a good fit for a task is usually pretty straightforward. We look at the results of the machine’s efforts and decide whether the speed increase is worth the cost. What makes general-purpose AI (like ChatGPT) more complicated to assess is that it’s closer to asking whether a person can do a task. Humans are general-purpose systems too.

Inspired by Alan Turing’s "Imitation Game" test, I find it helps to imagine giving the same task to both a person and an AI, picturing putting each in their own box so you can’t see who is performing the action. Then I ask, can the AI do the task so well that it’s indistinguishable from the person?

This is where general-purpose AI really gets put to the test in editing workflows. And in my opinion, the work it does isn’t good enough.

The Big Box Blunder

If you ask ChatGPT to edit a document, it does a horrible job. You can give ChatGPT more detailed instructions, and its results might improve. But they still don’t hit the mark.

The problem is that the task is too complex. The box that the person has been put in is too big.

A human editor doesn’t just mechanically apply a set of rules. An editor understands the subtlety of correcting and fine-tuning text while preserving the author’s voice. They know when to query the author and when to take a step back.

ChatGPT doesn’t truly understand what the author wants to convey. It’s editing based on probabilities. Hand it an entire document to edit, and verifying its changes becomes a chore – you’d pretty much have to re-edit the whole thing to catch all its mistakes.

Why does ChatGPT do this so badly?

Size Matters

The challenge with general-purpose AI is determining what you want it to do. With people, we’ve packaged up the different tasks and put them all into one big box with a job title (an editor). But that box is far too big to give to an AI tool. What if we break that role back down and look at the specific tasks and workflows? When deciding what to give an AI to do, it’s important to break the workflow down into small, manageable tasks before you apply the “person in a box” concept. You need to focus on a specific task (or set of tasks) that a human would previously have done.

Then at each stage you can ask ‘Who’s best for the job? A person or an AI?’ Often, a results-oriented assessment will find that an AI can do the task. So it’s all about getting the size of the task right.

A Real-World Example

Let me share an example that might seem silly at first but is the real-life inspiration for one of our products. Here are two tasks, given to an AI. The only thing that differs between them is the size of the task (a document in one case, a single sentence in the other):

Task 1: I ask an AI to rewrite my document in plain English. It gives me back a document that it believes is in plain English.

Task 2: I ask the same AI to rewrite my sentence in plain English. It returns a sentence that it believes is in plain English.

The results of the first task are terrible. The AI takes more time to work with the full document, and it costs more (in terms of processing resources) too. And the AI has a poor understanding of all the factors that go into plain English. It doesn’t understand the audience. It doesn’t know which information to emphasize. It doesn’t do a good job of making the right information easier for readers to find or to engage with.

The results of the second task are usually astoundingly good. Yes, it occasionally gives back nonsense. And it can miss important nuance. But it’s fast and the cost in processing resources is low. The occasional poor result is a reasonable price to pay as long as there is a person checking whether the output is good.

This isn’t just theory. It’s the practical insight that led us to create Draftsmith. We discovered that AI shines in following instructions when the box is small (like tidying up a sentence) but fumbles when the box is large (like fixing an entire document).
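For anyone who wants to experiment with this kind of workflow, here is a minimal sketch in Python. It is an illustration, not a recipe: ask_model() is a hypothetical stand-in for whichever AI service you use, and the point is only the shape of the two tasks, one document-sized request versus many sentence-sized requests that a person can review one at a time.

import re

def ask_model(prompt: str) -> str:
    """Hypothetical helper: send a prompt to whichever AI service you use
    and return its reply. Swap in a real API call here."""
    raise NotImplementedError("Connect this to your own AI service.")

def plain_english_document(document: str) -> str:
    # The big box: one document-sized task that is slow, costly, and
    # hard to verify without re-editing the whole thing.
    return ask_model(f"Rewrite the following document in plain English:\n\n{document}")

def plain_english_by_sentence(document: str) -> list[tuple[str, str]]:
    # The small box: one sentence per request, so each suggestion is
    # quick to produce and easy for a person to accept or reject.
    sentences = re.split(r"(?<=[.!?])\s+", document.strip())
    suggestions = []
    for sentence in sentences:
        rewrite = ask_model(f"Rewrite this sentence in plain English:\n\n{sentence}")
        suggestions.append((sentence, rewrite))  # keep the original alongside
    return suggestions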

The Importance of Human Checks

As the example shows, there’s more to choosing the right tool than just the size of the box. It also hinges on the results, and what you plan to do with them.

Just like people, AI makes mistakes. So, if you decide to use AI, it should either outperform a person doing the task by making fewer mistakes or be so cost-effective that the higher number of mistakes doesn’t matter.

In practice, the solution to a higher number of mistakes is human verification. A person can check the AI’s outputs and correct where needed. That combination is sometimes the most efficient and accurate way to approach a task.
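Continuing the hypothetical sentence-level sketch above, the checking step can be as simple as showing a person each suggestion and letting them decide whether it replaces the original. Again, this is an illustrative sketch rather than a prescription.

def review_suggestions(suggestions: list[tuple[str, str]]) -> list[str]:
    """Walk through (original, suggestion) pairs and let a person decide each one."""
    kept = []
    for original, rewrite in suggestions:
        print(f"\nOriginal:   {original}")
        print(f"Suggestion: {rewrite}")
        answer = input("Accept the suggestion? [y/N] ").strip().lower()
        kept.append(rewrite if answer == "y" else original)
    return kept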

A Helper for Editor’s Block

Have you ever found yourself staring at a sentence, thinking, “That’s a bad sentence”, and then, when you try to restructure it, your mind goes blank? Nearly everyone gets hit by editor’s block, and often we just need a gentle nudge to get over the hump and get started. You may turn to a colleague sitting nearby for a quick bit of inspiration, but that’s not always an option. With AI, you always have that “person” available. Ask an AI to reword a sentence and it might not get it completely right, but more often than not it provides enough inspiration to shift your perspective and see the sentence in a different way. That frees you to use your skills efficiently to craft beautiful sentences.

Cost and Data Security

If you’re thinking about using AI or any automated tool for a task, mistakes and checking aren’t the only things you need to consider. Two other critical factors are the costs involved and what happens to your data.

First, let’s talk about costs. One of the most transformative aspects of ChatGPT is its affordability: it’s not just what it does, it’s that it does it at a manageable price. Yet it’s essential to look at the whole picture. Consider the additional time spent reviewing and fixing the AI’s work; in editing tasks, the gains are often smaller than expected once that is factored in. You also need to weigh the broader impact: the right tool for the job can improve efficiency, save time, and lead to better business outcomes through higher-quality output.

The second factor is data security. AI hasn’t won everyone’s trust, particularly in the corporate world. If you’re working with sensitive data, it’s crucial to know where your data goes and to be sure that it’s not being used for training models or other purposes. Security concerns also vary across different workflows, even within the same industry. A regulatory submission may well carry heightened security concerns compared to, for example, materials for medical communications that are based on published research.

Choosing Between AI and Other Automation Options

I hope the concept of assigning a task to a ‘person in a box’ gives you a more useful framework for evaluating AI. AI comes with a lot of excitement and, in some cases, it’s well deserved because it can genuinely do amazing things. At the same time, a lot of the hype is just hype. You don’t have to follow the crowd and rush to adopt AI just because a software producer is overly excited about a tool that will probably be out of date by next summer.

Our approach is simple. We offer both AI tools and other automation options, allowing you to choose the technologies that add the most value to your work. We’re introducing Draftsmith for situations where AI can make a difference. However, we continue to believe that AI isn’t always the right answer. That’s why we’ll keep investing in PerfectIt, a non-AI option that runs without connecting to the internet or storing or sending any data anywhere. We’ve added PerfectIt for PowerPoint to our lineup (and I’m already working on PerfectIt 6).

AI and other automation tools should help you accomplish more. Thanks to PerfectIt, editors today spend less time searching for little consistency mistakes and more time helping authors with the parts of a document that matter. When considering your workflows, identify tasks that can be put in a box and passed to an AI. Draftsmith should give you some of those, and there are others too. None of them will be replacing editors any time soon!
