With the recent advances in AI technology exemplified by ChatGPT, and the writers' strike in Hollywood, it seems everyone is talking about AI. In the right circumstances, it can be an effective problem-solving tool.
I first gave an AI tool a whirl with Bing's Chat option and found it fabulously helpful. I've hated search engines for years because they so rarely return results that align with what I'm looking for. And even when they do, I have to click into at least half a dozen articles and skim them to find the information. Bing's Chat did all that work for me and returned a relevant answer to the question I asked. Great tool, especially compared to search engines.
AI Follows the Rules
For years, Grammarly has served as a copy editor, picking out comma splices and run-on sentences, and Microsoft Word has been catching our spelling mistakes. But like all AI tools, they have to be programmed with the rules.
Grammar has a nice set of hard-and-fast rules to compare against. But Grammarly can't pick out a wayward sentence that follows all the rules and yet just isn't working. It can't recognize a sentence that breaks all the rules but WORKS. I shudder to think what Grammarly would do with the first chapter of Shatter Me by Tahereh Mafi, which ignores so many of the rules yet draws the reader into the world and introduces the main character in a very compelling way.
Humans Break the Rules
Larger story elements (plot, character, setting) and story structure don't have the same hard-and-fast rules for an AI tool to work with. You could teach it to recognize a plot. You could program in a 3-Act structure and a 9-Act structure and teach it the difference between the two. A properly trained AI tool could then analyze your story and determine how well (or how poorly) it follows those rules.
But how much value does that provide?
Is your story better for strictly following the path of a 3-Act structure? Maybe. But maybe the actual problem is that there's no tension, and changing your structure won't fix that.
A human editor isn't confined to the rules they've learned through education and experience.
They can recognize a lack of tension because of what they feel when reading the story.
They can identify when characters aren’t developed well enough because of how the reader responds to various scenes.
They can decipher the balance of elements and help you see when setting is distracting from plot or plot is overwhelming your characters.
A human editor understands where you’ve broken the rules to make a joke.
No doubt those who create and train AI tools will try to make them capable of this sort of larger, more abstract analysis. But there's an article by Luciano Floridi that highlights that even though humans comprehend the meaning of generated text, to the machine it's still just 1s and 0s. Floridi reminds us that generative AI is doing “statistically—that is, working on the formal structure, and not on the meaning of the texts they process—what we do semantically.”
In other words, you can't train meaning into binary code, and you can't train creativity and emotion into a machine. You can try, because the appeal of writing and editing faster and with less effort is obvious; we usually want things to require less hard work.
But there's an aspect of human perception that can't be reduced to facts and information. Malcolm Gladwell's Blink explores this idea of perception that reaches beyond the obvious facts.
And there's a certain creativity in problem-solving, especially in the realm of storytelling, that an AI tool can't replicate.