Should I Trust an Algorithm for a Title for My Articles?


Technology has become our go-to resource for everything from medical testing to writing, and it will become a stronger force in the future.

Algorithms are becoming an unnatural part of our natural environment, and it leaves us questioning whether that’s a good thing or not. Our concerns are well-founded because the more we give up our intellectual creativity to machines, the more we are bound to lose something in the process.

Writing is an area where algorithms are becoming a dominant force in producing articles for the excessive demands of the 24/7 news cycle. Jobs are lost, surely, but what else is lost is more difficult to parse out. The subtleties of language and meaning are sometimes not programmed into the schema of our typical argot.

Right here, when writing the above paragraph, the underlying grammar/spelling checker in Medium insisted I meant things I never intended. It, if I can use that word here, wanted “schema” to be replaced with the name of an online company. “Argot” was to be replaced with “ergo” or something akin to the name of the ship of Jason and the Argonauts.

Photo by Niv Singer on Unsplash

Use an unusual word, and you get a stupid replacement for it. Yes, algorithms can be stupid, not by design but by the coder's ineptness. And it may get worse in the future as algorithms begin to write themselves. They call this "baked-in" bias: the bias written into the original code, from which all the new code progresses.

Even in ordinary English grammar, the algorithm wanted me to change "an unusual…" to "a unusual…." Incorrect, because "unusual" begins with a vowel sound, and words beginning with a vowel sound take "an." Remember the "a, e, i, o, u" of your early learning?

The algorithm didn't have that in its code. How did that happen? Will it learn from my refusal to accept this incorrect grammar? No, because as far as I know, I have no access to the original code. And the mistakes are there, and they could be serious. I wasn't even asked why I rejected the change. Sometimes it does throw up a box with possible reasons you chose something other than what it suggested. Not this time.
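The article-choice bug is a good illustration of how a rule like this gets miscoded. The sketch below is entirely my own hypothetical example, not the actual checker's code: a naive rule that picks "a" or "an" by the first letter fails on words like "university," while a sound-based rule (here faked with a tiny exception list standing in for real phonetic lookup) handles both cases.

```python
def article_by_letter(word):
    # Naive rule: "an" before a vowel LETTER. This is how a buggy
    # checker might be coded; it fails on sound-based exceptions.
    return "an" if word[0].lower() in "aeiou" else "a"

def article_by_sound(word):
    # Better rule: "an" before a vowel SOUND. The prefix lists are a
    # hypothetical simplification of real phonetic lookup.
    w = word.lower()
    consonant_sound = ("uni", "use", "eu", "one")   # "a university", "a used car"
    vowel_sound = ("hour", "honest", "heir")        # silent "h": "an hour"
    if w.startswith(consonant_sound):
        return "a"
    if w.startswith(vowel_sound):
        return "an"
    return "an" if w[0] in "aeiou" else "a"

print(article_by_letter("unusual"))     # an  -- correct here
print(article_by_letter("university"))  # an  -- wrong; should be "a"
print(article_by_sound("university"))   # a
print(article_by_sound("unusual"))      # an
```

The point is not the exception list itself but that the correct rule depends on sound, not spelling, which is exactly the kind of subtlety a hastily coded checker misses.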

It’s understandable. The multitude of digital avenues now available to us demand content with an appetite that human effort can no longer satisfy. This demand, paired with ever more sophisticated technology, is spawning an industry of “automated narrative generation.”

And the content demands of media incorporate something else: SEO. Whenever you write something, sensitivity to how it will rank on major search engines must be foremost in your mind. I know it's in mine, and I've read enough articles to pound that one home.

So it is that when I’m writing an article of any kind, on any topic, I have to either scratch my head for a catchy title or go to an algorithm for help. I must admit that I’m inclined to get help from the algorithm as I present title after title to see what score each receives. Anything over a 70 is a go for me.
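For concreteness, here is what that title-scoring workflow can look like. This toy scorer is entirely my own hypothetical sketch (real headline tools keep their scoring formulas private), and the heuristics below are illustrative only:

```python
def score_title(title):
    # Toy stand-in for a headline-scoring tool. Hypothetical heuristics,
    # not any real product's algorithm. Returns a score from 0 to 100.
    words = title.split()
    score = 50
    if 6 <= len(words) <= 12:       # headline-length sweet spot
        score += 20
    if any(w.lower() in {"how", "why", "what", "should"} for w in words):
        score += 15                  # curiosity/question words
    if any(ch.isdigit() for ch in title):
        score += 10                  # numbers tend to score well
    return min(score, 100)

print(score_title("Should I Trust an Algorithm for a Title for My Articles?"))
```

With the over-70 threshold described above, even this crude scorer would wave the article's title through; a real tool simply weighs many more signals of the same kind.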

I still hold the reins on this monster when writing articles or titles, but that's not always the case for media. Think how many headlines have misled you. What was your response? Were you angry? Did you feel used, or write a letter to the editor, if you could even find contact info for that person or algorithm?

The impact on journalism and writing, in general, is a serious issue, and factors such as ethics and bias must be addressed. College essays, too, can be written by algorithms, and paper mills abound. Plagiarism is a prime area of concern as bots can be run to write anything the buyer wishes to procure.

Think bots aren't taking over writing tasks? Think again. A major player in news, the Associated Press, is now using these programs to write articles. And this isn't anything new. The article I noted was written in 2015. In 2017, the prestigious Washington Post admitted that it uses an algorithm for articles.

The production of articles written by automation bots is staggering, as has been pointed out. Today, it is undoubtedly even greater, and the errors are probably very well baked in by now.

We now have NLG (natural language generation) platforms to produce the written material. Who does the oversight to ensure they're making the correct connections and assumptions? Another algorithm that could also be biased? This is serious.

Is Trust an Outdated, Old Commodity?

Long-form journalism earned many journalists and book authors their deserved place in our cultures. But if bots are writing, how do we know we can trust what we’re reading?

An editor in charge of the WAPO algorithm-written articles attempted to lower our anxiety level: "…when the program first began in July, every automated story had a human touch, with errors logged and sent to Automated Insights to make the necessary tweaks."

Full automation began in October (2017) when stories “went out to the wire without human intervention.” 

No humans were involved, and she said there were fewer errors then. That raises the question: what types of errors might have gone undetected?

I’ve seen too many headlines in my various media feeds that were misleading and must have met that clickbait scoring I encountered in my title creation.

I remain a fan of the title creator I use, but I carefully go over its suggestions to see where a title could be misleading or misunderstood. How much of the media produced meets that standard, when we've already seen that WAPO's output goes out "without human intervention"?

Patricia Farrell is a licensed clinical psychologist in New Jersey and Florida in the United States, a published author, former psychiatric researcher, educator, and consultant to WebMD. She specializes in stress and medical illness and has been in the field for over 30 years. Prior to becoming a psychologist, Dr. Farrell held a number of editorial positions in trade magazine publishing and newspaper syndication. Her interests include photography, computers, and writing both fiction and non-fiction.
