Artificial intelligence that advances real human creativity
This branding hits me straight on, and not in a good way.
First off, I’m bothered by “real.” It juxtaposes nicely with “artificial,” but in these days of fake everything, must we now also worry that creativity has gone the way of eyelashes and news?
I suppose this development was inevitable. After Jodie Archer and Matthew L. Jockers published the results of their bestseller project as The Bestseller Code, in which they ran text through a computer program to determine whether they could predict a bestseller, it was only a matter of time before someone figured out how to monetize the approach.
With their “real human creativity” branding, Authors.me can’t be the only platform now offering some variation of this model—for a small fee, you can “harness the power of the machine to understand your content’s potential.” But it’s the one I happened upon while researching a magazine assignment.
Click the “Get Smart” button, and you’ll see the full range of what Authors.me provides. Intelligent Editorial Analysis. An aggregate numerical evaluation. A “bird’s eye view” of your manuscript with regard to readability, diction, point of view, syntax, and word length.
A sample readout on diction slathers on this praise: “The software measured 527 multi-syllabic words in your manuscript, which places it in the typical range of bestsellers. This contributes to your book’s accessibility and potential audience size.”
Commissioned reports also address Story Arc, matching the submitted manuscript against six “literary archetypes.” Man-in-a-hole is one of the six. Archetype, it seems, has taken on new dimensions since I last studied literary criticism.
To “visualize” your book toward this end, the computer conducts a sentiment analysis. I can’t help but think that there are multiple markets for this service. Dating sites. Weddings. Funerals. Really, the sky’s the limit.
Oops. A cliché, something AI (Artificial Intelligence) would be happy to point out had I ponied up $49.99 for an analysis of this post.
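For the curious, the "story arc" in reports like this is generally built from that same sentiment analysis: score successive slices of the manuscript for emotional positivity, smooth out the noise, and call the resulting curve the shape of the book. Here is a minimal sketch of the idea, with a made-up word list and arbitrary parameters; it illustrates the general technique, not Authors.me's actual method.

```python
# Toy sentiment-based "story arc" sketch. The word lists, chunk count, and
# smoothing window are hypothetical choices for illustration only.

POSITIVE = {"love", "joy", "hope", "triumph", "laughed", "beautiful", "won"}
NEGATIVE = {"grief", "fear", "loss", "alone", "wept", "failed", "dark"}


def chunk_scores(text: str, n_chunks: int = 20) -> list[float]:
    """Split the text into roughly n_chunks slices and score each slice as
    (positive words - negative words) / words in the slice."""
    words = [w.strip(".,!?;:\"'") for w in text.lower().split()]
    size = max(1, len(words) // n_chunks)
    scores = []
    for i in range(0, len(words), size):
        chunk = words[i:i + size]
        pos = sum(w in POSITIVE for w in chunk)
        neg = sum(w in NEGATIVE for w in chunk)
        scores.append((pos - neg) / len(chunk))
    return scores


def smooth(scores: list[float], window: int = 3) -> list[float]:
    """Moving average, so the arc reads as a curve rather than noise."""
    half = window // 2
    out = []
    for i in range(len(scores)):
        neighborhood = scores[max(0, i - half):i + half + 1]
        out.append(sum(neighborhood) / len(neighborhood))
    return out


if __name__ == "__main__":
    manuscript = open("manuscript.txt", encoding="utf-8").read()
    arc = smooth(chunk_scores(manuscript))
    # Crude ASCII "visualization": one bar per slice, taller = happier.
    for i, s in enumerate(arc, 1):
        print(f"{i:3d} {'#' * int((s + 1) * 20)}")
```

Notice how little of what we would call reading is involved: word counting, a bit of arithmetic, and a smoothing pass.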
I work as a freelance editor, so admittedly I’m biased toward the human in these endeavors. But there’s something to that old adage about getting what you pay for. If you commissioned this sort of work from me, or from any editorial professional, you’d expect to get a lengthy editorial letter that digs deep into the substance of your work. With this report, you get “Your writing uses hyperbole sparingly, which is ideal. You are a measured, objective writer” and “Explicit language was counted 33 times.”
Now that vacuum cleaners and cars drive themselves, I suppose it’s pointless to quibble. And yet there are those of us who do this work, writing and editing, who continue to believe that creativity is a process that defies algorithms.
Worthy of note: The list of publishers using Authors.me to screen manuscripts is substantial and growing.
On second thought, strike that colon. A readability analysis suggests you won’t like it.
Deb Vanasse is the author of seventeen books with six different publishers. Among the most recent are Write Your Best Book, a practical guide to writing books that rise above the rest; Cold Spell, a novel that “captures the harsh beauty of the terrain as well as the strain of self-doubt and complicated family bonds”; and the “deeply researched and richly imagined” biography Wealth Woman. After thirty-six years in Alaska, she now lives on the north coast of Oregon, where she relies on her non-algorithmic brain for creative work that seems real enough to her.
Deb,
Your review of Authors.me confirms just how limited technology is when applied to writing. It’s disturbing to know that computer editing companies are on the rise. Maybe time and unsatisfactory results will reveal their weakness.

Unfortunately, this flawed technology infiltrates education as well. About eight years ago, the Anchorage School District subscribed to a writing analysis service at $35 per student per year. The service purported to evaluate a piece of writing in six areas, including mechanics, organization, voice, and word choice. As an English teacher, I was skeptical. My students grew frustrated when the program penalized them for using place names or proper nouns it didn’t recognize. Furthermore, it couldn’t differentiate dialogue from exposition. My more clever students learned that it rewarded length, even if the additional sentences were pure gibberish. One day, I cut and pasted entire paragraphs from an E.B. White essay into a student essay and watched the program spit out perfect scores. Yet the added paragraphs were randomly placed and had nothing to do with the writing prompt, proving the program could measure neither content nor organization, the two areas middle school students struggle with most.

I shared the results with the head of curriculum and assessment. The program became optional the next year and was eliminated from the district within three years. Unfortunately, I still don’t know whether its disappearance had to do with its ineffectiveness or was simply a budgetary decision. Even more disturbing, there were teachers who promoted its use and lamented its discontinuance.

It’s just a matter of time before a future school board buys into another technology sales pitch to evaluate student writing. We need to continue to defend the human brain as the only effective editor. Thanks for the instructive post.
Yikes…worrisome indeed that this has been inflicted on young writers. Thanks for sharing this!