The zombie internet apocalypse
The dead internet theory is a conspiracy theory, which in itself is enough reason to doubt it. Like many fashionable conspiracy theories it originated in regions of the internet frequented by people with really unpleasant views, which is another reason.
But.
The theory really says two things:
- almost everything on the internet is now produced by machines, not humans;
- this is being intentionally engineered by shadowy elites for reasons.
The second of these is the usual paranoid conspiracy nonsense. The first is a lot more interesting. In fact you can say something even stronger than the first.
The zombie internet theory. A large and increasing amount of the content on the internet is produced and consumed by machines, not humans.
This is interesting because it’s testable; because when you do test it, it appears to be true; because it’s easy to see why it is happening; and, finally, because of where it probably leads.
You don’t have to look far to see that quite strange things are happening. Facebook is full of posts containing repetitive and obviously machine-generated images, with tens of thousands of likes. Internet searches, especially for common topics, turn up increasing amounts of repetitive trash, which I used to think was written by people paid by the word in low-wage economies but is now probably also generated by machine. When you read this stuff you realise pretty quickly that it doesn’t have an answer to what you’re looking for (or if it does, the answer has been stolen from somewhere else), and that the real purpose of the whole thing is simply to make you spend time reading it.
And the amount of this junk is increasing rapidly: I’m fussy about the correct use of the term ‘exponentially’, but it may be increasing exponentially. And exponential processes are things you shouldn’t ignore.
It’s pretty easy to see what is enabling this: the big neural-network systems people now call AI have made it incredibly cheap to generate huge amounts of this content. And other similar systems are busy posing as humans and commenting on it, liking it or whatever.
These systems are passing the Turing test: they’re persuading people that they’re humans. But, well, it’s arguable that ELIZA passed the Turing test: it isn’t, it turns out, a very interesting test. These systems, despite all the breathless hype, are not actually intelligent. The current AI bubble is just that, and I suspect it’s mostly driven by the thing that drives most bubbles: a group of people extracting money from the gullible while pretending to believe, or actually believing, in some absurd story. We’re not all going to be destroyed by some superintelligent AI. Better hope your pension fund hasn’t invested too heavily in the bubble. But that’s not the subject of this post.
A more interesting question is why this is happening. There are two obvious reasons: propaganda and money.
Everyone, by now, knows about Russian troll farms and other similar things. These were once, probably, made up of people whose job was to troll. Now they’re machines whose job is to troll. And the machines don’t eat or sleep so they can troll a whole lot more than the people could. That’s not going to end well, but it is also not the subject of this post.
What about money? Well, money is the subject of this post. It’s about money because the internet is built on advertising: Google isn’t a search engine company, it’s an ad broker; Facebook isn’t a social media company, it’s an ad broker; Twitter isn’t whatever Musk wants it to be, it’s a dying ad broker. And the way these companies get their customers, the advertisers, to pay them is by counting how many times humans look at the adverts they’ve placed, together with some estimate of how likely those humans are to buy things as a result. That’s why they need to harvest the souls of their users: so they can present to them the adverts most likely to make them spend money on the products being advertised. And some of the money they take from advertisers goes to the people creating the content, or some of them, as an incentive to keep creating content.
So what’s happened is that, firstly, people have realised that machine-generated content means that humans spend more time looking at advertising. They probably spend less time on each individual bit of machine-generated junk, but they have to wade through far more of it to get to what they’re actually after than they did before it existed. And the people behind the machine-generated content get paid by the ad broker. And it’s very, very cheap to make so the amount of it is exploding.
The ad brokers don’t care about this first thing at all. In fact they like it: they don’t care if the internet is enshittified for the mere users, so long as the money keeps pouring out of it.
But secondly people have realised that it doesn’t have to be a human who looks at the adverts: it can be another machine. So long as the machine can persuade the advertising system that it’s a human the advertisers pay for the view and the people behind the content get paid. If it can persuade the advertising system that it’s a human who is likely to end up buying whatever is being advertised the people behind the content get paid even more. So they spend time making the machines seem like they’re humans who are likely to buy things, and so we have a tide of machine-generated content getting tens of thousands of likes from other machines. Content made by zombies being viewed by other zombies.
Superficially the second trick might seem to be in the ad brokers’ interests too: there are more views of the advertising from ‘humans’ and even from ‘humans who are likely to buy the thing being advertised to them’.
But it is not in their interests, or not in the long term. Machines don’t buy things, after all, so the advertisers are throwing their money away. The whole vast pyramid that is the internet in 2024 rests on a foundation of the money made by advertisers who sell actual products to humans. Unless the advertisements they place pay for themselves in increased sales they will stop advertising in those places. And the pyramid will fall.
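The arithmetic behind this is simple: the advertiser pays for every view, but only human viewers can convert into sales, so past some break-even bot fraction the campaign loses money. A minimal sketch of that calculation (all the numbers here are invented purely for illustration):

```python
def ad_roi(views, bot_fraction, conversion_rate, revenue_per_sale, cost_per_view):
    """Net return on an ad campaign when some 'viewers' are machines.

    Only human views can convert; bots never buy anything.
    All parameter values used below are hypothetical.
    """
    human_views = views * (1 - bot_fraction)
    revenue = human_views * conversion_rate * revenue_per_sale
    cost = views * cost_per_view  # the advertiser pays for every view, bot or not
    return revenue - cost

# A campaign that is profitable with an all-human audience...
print(ad_roi(100_000, bot_fraction=0.0, conversion_rate=0.01,
             revenue_per_sale=5.0, cost_per_view=0.03))   # roughly +2000

# ...loses money once half the 'audience' is machines.
print(ad_roi(100_000, bot_fraction=0.5, conversion_rate=0.01,
             revenue_per_sale=5.0, cost_per_view=0.03))   # roughly -500
```

The point is that the cost side scales with total views while the revenue side scales only with human views, which is why bot traffic silently turns a profitable campaign into a loss-making one.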
So they care, or should care, very much about the second thing: if advertisers stop paying them the ad brokers are in trouble. They’re in worse trouble because their current market capitalisations are so inflated: when the bottom falls out it has a long way to fall.
So I am sure the ad brokers have been working very hard to detect when a machine, rather than a human, sees an advertisement. But these are machines which are passing the Turing test: they’re fooling humans into believing they’re human, so I’m pretty sure they’re going to be able to fool other machines. And the ad brokers have already enshittified their services so much that real humans are beginning to leave in large numbers: they’re in a bad place.
I think the zombie internet apocalypse is coming. If I were an investor I’d be shorting the ad brokers.