Artificial intelligence

6 ways to spot AI-authored copy

Illustration showing human working alongside AI

Generative AI can do some amazing things. It’s a painter and musician and coder and, of course, author.

How good it is at performing those roles is up for debate. AI artwork regularly drifts into accidental surrealism, with superfluous human limbs and bizarre fusions of objects.

But what about AI-generated copy? While the glitches can be glaringly machine-like in a picture, they’re more subtle in a passage of text. Here’s how to spot them.

Repeat offence

My father was an avid reader and writer. He’d often take a keen interest in the essays that I wrote for school. One of his most useful pieces of advice was to avoid repeating myself.

He was right. Repetition weakens writing. A lack of variety in phrasing can make an article dull. Redundancy labours a point through duplication. Human authors do their best to avoid these.

A computer, on the other hand, is unlikely to police itself to anywhere near the same degree. Snippets on a topic are pulled from here, there and everywhere to build an article, so there’s a strong probability that key points will be repeated over and over and over*.

*Sorry, blatant repetition, I know.
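
If you’re curious how this check might look automated, here’s a minimal Python sketch that flags three-word phrases recurring in a passage. The function name, phrase length and threshold are arbitrary choices for illustration, not a real detector; plenty of perfectly human writing would trip it too.

from collections import Counter
import re

def repeated_phrases(text, n=3, min_count=3):
    # Lowercase the text and pull out the words.
    words = re.findall(r"[a-z']+", text.lower())
    # Build every n-word phrase in reading order.
    ngrams = (" ".join(words[i:i + n]) for i in range(len(words) - n + 1))
    counts = Counter(ngrams)
    # Report only the phrases that recur suspiciously often.
    return [(phrase, c) for phrase, c in counts.most_common() if c >= min_count]

sample = "Key points will be repeated over and over and over. " * 3
print(repeated_phrases(sample))

It’s a blunt instrument, but it makes the point: mechanical repetition is the kind of pattern that’s easy to measure, and easy for a human reader to feel.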

Yesterday’s news

Generative AI platforms are trained on huge sets of data. Unless the platform in question has live access to the internet, its knowledge base only extends as far as its last update. The platform would not be privy to the latest developments on any given topic.

Old news is unengaging at best and misleading at worst. Humans and search engines alike favour high-quality, original content. Out-of-date doesn’t necessarily mean no longer correct. It can simply be information that has become so commonly known that publishing it again is redundant. Customers prefer personalised email to non-personalised!? Hold the front page!

Get your facts right

If you’re using a generative AI tool to produce or assist with articles, never take it for granted that the software knows what it’s talking about. Because, technically speaking, it does not know what it is talking about. It algorithmically reproduces and combines content from multiple sources, which can include information that is no longer true, or that never was.

As a reader, keep an eye out for factual errors and especially contradictions. If it smells fishy, trust your instincts and verify the information elsewhere.

What’s the story?

A good-quality article written by a human has a story-like flow. There’s a beginning and a conclusion. Computer-generated articles, on the other hand, often come to an abrupt end.

And what’s a story without a message? A good story makes you think and feel something. A robotic author literally feels nothing, so why should you as a reader?

Don’t you dare

Language models by default are clinically impartial. A platform won’t automatically spit out a controversial opinion that makes you stop in your tracks. It’ll compile a collection of neutral statements of fact.

You can, of course, coax it out of its formal shell with prompting. The results are perfect, if you’re aiming for a plasticky “have a nice day” flavour.

A human’s opinion piece carries real emotion and real sentiment. Even an article that you fervently disagree with can be an excellent read. There’s a human-to-human spark that is missing with AI.
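
For the curious, this check can also be roughed out in code. The toy sketch below scores a passage by the fraction of overtly opinionated words it contains; the word list is made up for illustration and real tone analysis needs far more than this, so treat it purely as a sketch.

# Toy check: does a passage carry any opinionated language at all?
# This lexicon is a hand-rolled illustration, not a real sentiment resource.
OPINION_WORDS = {
    "love", "hate", "brilliant", "awful", "outrageous",
    "wonderful", "terrible", "absurd", "delightful", "dreadful",
}

def opinion_density(text):
    # Fraction of words that carry overt sentiment.
    words = [w.strip(".,!?").lower() for w in text.split()]
    hits = sum(1 for w in words if w in OPINION_WORDS)
    return hits / max(len(words), 1)

neutral = "The platform compiles statements of fact on the topic."
heated = "I love this absurd, brilliant, outrageous piece."
print(opinion_density(neutral), opinion_density(heated))

A clinically even score across a whole article is the pattern to notice: humans rarely manage to be that consistently beige.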

It just feels… off

You’ve probably heard about the uncanny valley. It’s a term often applied to computer-generated or animatronic simulations of human faces. Our brains are acutely conditioned to recognise faces with their every nuance and motion. It would take something very special to fool us.

AI-authored articles often fall into a linguistic uncanny valley. Attempts at personality are injected jarringly, equivalent to writing “LOL” in the middle of a legislative document. Instead of a human voice shining through the words, there’s a perceptible artificiality to those written by a computer.

Image of mannequin faces that demonstrate the uncanny valley effect.
This, but in words.

How much does it matter?

If we read something and enjoy or learn from it, does it matter if a computer wrote it? What if it was only computer-aided? Platforms like ChatGPT can be very useful as idea generators.

Is it OK if the text is a piece of marketing blurb rather than an opinion piece? How about a social media post, or a response to one? Can there be any value in fiction or poetry conjured through ones and zeroes?

Ultimately it’s up to each of us as individuals to decide how we feel about AI, but it’s hard to deny that authentic human content is going to become rarer. With that in mind, it can’t hurt to be able to tell the difference.