
The Impact of AI Content on SEO: How Does It Affect Site Performance?

Perhaps no single human capability has been reproduced by artificial intelligence as completely as the generation of text. After all, the systems most people picture when they think of AI – ChatGPT, Gemini, and other state-of-the-art models – are Large Language Models, or LLMs. While LLMs have demonstrated emergent capabilities, like writing code and executing other basic tasks on computers, at their core they are text generators. An LLM is essentially a gigantic statistical calculator built to predict which word is most likely to come next, given the context provided by the human input, or “prompt,” and the words it has already produced.
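That prediction step can be illustrated with a toy sketch. This is not how a real LLM works – LLMs use huge neural networks rather than lookup tables – but a simple bigram counter over a made-up corpus shows the basic idea of “pick the statistically likeliest next word”:

```python
from collections import Counter, defaultdict

# A tiny hypothetical corpus; real LLMs train on trillions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word after `word`."""
    counts = bigrams[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" – it follows "the" more often than any other word
```

A real model generates whole passages by repeating this step: predict a word, append it to the context, and predict again.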

On the surface, this might sound like an SEO windfall: unlimited text generation at every marketer’s fingertips. Being able to produce tens of thousands of words on command could, it seems, easily optimize a website’s content. But it’s not quite that simple.

Consistency is Key


While Google’s search algorithm isn’t public, one factor any marketer worth their salt knows is that Google prioritizes evergreen content – content that remains relevant despite the passage of time. However Google’s code defines it internally, the results make one thing obvious: Google values content that draws consistent traffic over content that produces sharp upticks that don’t last. As it turns out, this isn’t a strength of AI content. Whether people can intuitively tell the difference or AI simply doesn’t produce great writing, the results show that AI-generated content rarely keeps people coming back for more.

Take the case of Bonsai Mary, a website designed by SEO marketer Jesse Cunningham to test the effectiveness of AI-generated copy. Jesse populated bonsaimary.com with AI-generated content about houseplants and how to care for them. The website quickly skyrocketed through the search rankings, but almost as quickly as it had risen, it plummeted out of them – indeed, you can search directly for Bonsai Mary today, and despite the site having a near-perfect domain name, it doesn’t even appear on the first SERP.

Martin Jeffrey, another SEO specialist from Black Lab Digital, conducted a similar experiment. Martin built a 15-page site full of SEO-optimized content generated by AI. Just like Jesse Cunningham and his Bonsai Mary page, the site quickly ranked near the top of Google SERPs within just a few months. But even faster than it had risen, his site crashed through the ranking floors, all the way down to zero views in the course of just a few days.

These dramatic fluctuations show us one key fact about Google’s algorithm, something we’ve known for a long time, but has become even more obvious over time: it doesn’t appreciate being gamed, and genuine, quality content is the only way to stay on top.

Bias is Bad


One of the best things about computers and traditional, human-written algorithms is that, unless deliberately programmed otherwise, they are unbiased. Computers are, by nature, blind to race, religion, and sexual orientation – they’re just crunching 1s and 0s. While that very indifference is one reason to be cautious about the unfettered pursuit of powerful AI, it is, oddly enough, a property that statistical number-crunching word-predictors don’t share, for one very simple reason: they’re trained on human writing.

Deep learning, the technique used to build most artificial intelligence models, consists of feeding large amounts of input data – visual, verbal, numeric, or any other type the model is built to process – into an algorithm designed to extract statistical associations from that data. One of the interesting things about deep learning models is that they scale well: the precision and quality of their output improve substantially, in some cases dramatically, with the amount of data they are “trained” on.
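The training process itself can be sketched in a few lines. The following is a deliberately minimal, hypothetical example – a single-weight model learning the rule y = 2x – rather than real deep learning, which tunes billions of weights, but the mechanism of nudging weights until predictions match the data is the same:

```python
# A minimal sketch of "training": adjust a weight so predictions match data.
# Hypothetical one-parameter model y = w * x.
data = [(1, 2), (2, 4), (3, 6)]  # inputs paired with desired outputs (y = 2x)

w = 0.0    # the model's single "weight", learned from data
lr = 0.05  # learning rate: how far to step on each update
for _ in range(200):
    for x, y in data:
        error = w * x - y    # how wrong the current prediction is
        w -= lr * error * x  # nudge the weight to reduce the error

print(round(w, 2))  # converges to ~2.0, the association hidden in the data
```

The model never “understands” the rule; it simply ends up with whatever weights best reproduce the statistical patterns in its training data – which is exactly why the character of that data matters so much.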

So, naturally, state-of-the-art LLMs that reproduce human-quality writing must be trained on massive volumes of human-written text. As it turns out, most humans are biased in one way or another, and because the algorithm is statistical, our most common collective biases leak into the model’s output. The more training content shares a certain characteristic, the more heavily that characteristic is represented in the statistical coefficients – the “weights” – that the model uses to predict and output text, and therefore the more likely the model is to produce content reflecting it.
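Because the weights mirror training frequencies, any skew in the data becomes a skew in the output. A hypothetical, highly simplified illustration – treating prediction as bare frequency counting over a made-up set of training snippets:

```python
from collections import Counter

# Hypothetical training snippets: the corpus over-represents one association.
training_sentences = [
    "doctors are men", "doctors are men", "doctors are men",
    "doctors are women",
]
completions = Counter(s.split()[-1] for s in training_sentences)

# The model's "weights" end up proportional to training frequency,
# so a 3-to-1 skew in the data becomes a 3-to-1 skew in the predictions.
total = sum(completions.values())
probs = {word: count / total for word, count in completions.items()}
print(probs)  # {'men': 0.75, 'women': 0.25}
```

Real models are far more sophisticated, but the underlying point holds: a model trained on skewed text will, absent countermeasures, reproduce that skew.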

Google doesn’t like bias.

Granted, most AI companies that offer their models for public use do a decent job of filtering out the worst content. But little things definitely slip through the cracks. It’s easy to conclude that the only way to use AI responsibly for content creation, without its baked-in biases producing a politically incorrect “oops” every once in a while, is to pair it with human editing and revision.

Quality is King


At the end of the day, there is one factor we know improves SEO by driving consistent traffic and ranking well in Google’s algorithm: producing high-quality, unique, informative, and relevant content. There is simply no replacement for it, and truth be told, AI just isn’t up to the task yet. As an “averaging machine” that tends toward the most middle-of-the-road, generic response, it will likely always fall short of human writing in originality. It will also likely never match humans’ ability to communicate meaningfully with other humans – to imbue content with the relatability that people subconsciously look for in writing they presume, by default, to be crafted by a human.

The Uncanny Valley, a phenomenon first described in robotics and widely observed in animation, describes the way people feel when they see an animated version of a human that comes very close to resembling a real one but just isn’t quite right. As it turns out, people react worse to near-human animation that is just a little bit off than to animation that makes no attempt at realism. It’s theorized that we feel a certain revulsion toward these technically impressive animations because, while our brain initially perceives them as human, their unnatural movements subconsciously signal that something is wrong – that they are potentially untrustworthy.

This same phenomenon has been observed in people’s reactions to AI-generated content, too: while readers don’t always recognize AI-generated text as AI-generated, they can often just tell that something isn’t quite right. It seems we have a good sixth sense for authenticity, and it serves us well when interacting with AI – at least for now.

All that is to say, AI-generated content is unlikely to contain the novel information or relatable voice that makes for truly high-quality, engaging content, and that means AI content will remain, for now, categorically inferior to genuine human content when it comes to SEO. Like it or not, when it comes to writing good content for websites, we humans are here to stay!

Follow The Eastern Herald on Google News. Show your support if you like our work.
