
As we enter the age of AI, where computers can write like humans, it's worth thinking about the computers that choose what we read.

I’ve been talking to a few friends and colleagues about my previous post on the future of content and generative AI.

Something that kept coming up was how much generative AI is already moulding the landscape of our search results and social media feeds.

How Search Results Lost the Plot

When search engines first appeared, their role was to help us make sense of human-generated content and find what we were looking for in a rapidly expanding internet. But at the core of search engines is a computer choosing what content you read.

In the beginning, this worked fine, but with the success of Google came the monetary drive to get your website to the top of the search results. This started a Search Engine Optimisation (SEO) battle between human content creators and the Google algorithm, with the humans trying to trick their way to the top and the algorithm trying to deliver quality results.

What’s changed is how the SEO war was won: in the end, both sides lost.

In the early 2020s the first AI generative writing services hit the mainstream, and suddenly the SEO-focused content generators could beat Google (and other search engines) at their own game. What understands computers better than other computers?

Automated services could generate content, track how it performed in search results and change the content accordingly. In short, you could teach a content-generating computer how to create content that search engine computers love. No more humans crafting copy, no more SEO wizardry; just let the computers have at it and watch your sites make it to the top of the search results… as long as your competitors aren’t doing exactly the same thing.
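That generate-measure-revise cycle can be sketched in a few lines. This is a purely illustrative toy, assuming nothing about any real service: every function below is a hypothetical stub, and the "ranking" is faked so the loop is deterministic.

```python
# Hypothetical sketch of the generate/track/revise loop described above.
# None of these functions correspond to a real service's API.

def generate_article(keywords):
    """Stub: pretend to produce AI-written copy for the given keywords."""
    return f"Article about {', '.join(keywords)}"

def measure_ranking(article):
    """Stub: pretend each revision climbs the rankings; 0 means top spot."""
    return max(0, 5 - article.count("[rev]"))

def revise(article):
    """Stub: pretend to tweak the copy based on how it ranked."""
    return article + " [rev]"

def optimise(keywords, target_rank=0, max_rounds=10):
    """Keep revising until the article 'ranks' well enough or we give up."""
    article = generate_article(keywords)
    for _ in range(max_rounds):
        if measure_ranking(article) <= target_rank:
            break
        article = revise(article)
    return article
```

The point of the sketch is the feedback loop itself: the generator never needs to understand readers, only the scoring function it is optimising against.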

Of course, there's a big difference between "content" and genuine human thoughts, ideas and creativity. I'd like to think most people are more interested in the latter than in content for its own sake.

How Computers Are Wrecking Social Media

I’ve already talked about how computers mistake conflict for engagement, but they aren’t finished mucking up your social media feeds.

If computers can generate content to exploit search results, they can do the same to social media.

Twitter bots are probably the most well-known villains, as Elon Musk famously tried to back out of his Twitter purchase when he found out how many users were non-human.

Bots can post, message and pretend to be human. Not all are malicious, but some are created with the aim of scamming people out of money, spreading misinformation or influencing public opinion.

Not only does this infect social feeds with bad-faith content, but people are also targeted by bots with enough AI oomph to impersonate a person in a one-on-one conversation. Many of us are already used to treating direct messages from suspiciously attractive individuals as spam, but generative content is already in our feeds.

Whether someone is using a generative bot to write an “inspirational post” on LinkedIn, or a fully automated Instagram account is pumping out constant communication, the amount of genuine human interaction on social media is being watered down.

I’ve been online since I was a teenager in the late 80s, and one of the core ideals back then was the democratisation of communication: everyone could have an equal voice online. But the current reality is that people with the resources to set up automated bots to spam, influence and misinform have a far greater say in online discourse. Even worse, people who think they are “speaking truth to power” have often been tricked into spreading lies from power.

Wrapping it Up

In having computers choose what content we read, we’ve made it so that content-generating computers are better at exploiting content-choosing computers than humans are.

As a result, we’re seeing a drift back to the early 2000s style of social media where communities of like-minded individuals congregate to discuss their passion. Reddit, Discord and messaging groups are good examples of this. When the communities are smaller and more focused, it’s easier for humans to spot computers pretending to be humans.

Community is incredibly important for humans. But in recent times we’ve let our sense of it slide, to the point where computers have a surprising amount of control over what we read, who we interact with and even the “facts” we believe in.

How we’re going to untangle our current predicament remains to be seen, but the first step is acknowledging there’s a problem.

Banner Image by MidJourney
