If you’re old enough, you remember how amazing Google search was when it first came out and for the first few years. Excellent results, right at the top. Nowadays, it’s crap, and half the time, to find what I want, I have to append “Reddit” to the query or search very specific domains. (Reddit is likely to be worthless in a few years due to the IPO.)

Anyway, Google search results became crap for three main reasons, from least to most important:

  1. Worship of the official and the orthodox. Every time I search some medical issue, the top twenty sites tell me the same thing. That didn’t use to be the case: for cancer, for example, the old “cancer tutor” site would be on the first page. Maybe it’s good that its equivalent no longer is, but I wanted to read the alternative views as well as the orthodoxy.
  2. Monetization. Prioritizing selling ads over providing the best search results has had the effect one would expect.
  3. Organic link destruction. What made Google so good at the start is that its algo was almost entirely based on the number of incoming links a site had (see the sketch after this list). Since the internet at that point was almost all human-created, links were highly curated and genuine: someone had read the site, liked it, and taken the time to link to it. Nowadays, most links aren’t organic: they’re SEO crap or advertising or intended to game the search algo, leading to an endless arms race. A link is no longer an endorsement, and there’s no easy way around that: nothing can replace a human being reading a site, liking it, and linking to it.

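To make that concrete, here’s a minimal sketch of the idea behind Google’s original PageRank: a page’s rank comes from the ranks of the pages that link to it. This is the standard textbook formulation; the damping factor and the tiny example web are illustrative assumptions, not anything from Google’s actual code.

```python
# Toy PageRank: rank pages by their incoming links, weighted by the
# rank of the pages doing the linking. Standard power-iteration sketch.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

# A tiny hypothetical web: several pages link to "blog", so it ranks highest.
web = {"blog": ["news"], "news": ["blog"], "fan1": ["blog"], "fan2": ["blog"]}
print(pagerank(web))
```

The whole scheme only works if each link in `links` represents a genuine human endorsement; feed it SEO link farms and the ranking means nothing.
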
Google, to put it simply, destroyed its own usefulness by destroying the internet ecosystem that produced organic links: links by people who didn’t expect to be paid for them, to sites they found interesting, whether those sites were official or orthodox or not.

Now, Large Language Model (LLM) AI is trained on, basically, the entire internet. It’s essentially statistical. How good the AI is depends on how good what it trained on is, with a lot of tweaking to try and point it towards stuff that isn’t horrible (not good, just not horrible: avoiding racism, for example).
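To see what “essentially statistical” means in miniature, here’s a toy bigram model. It’s nothing like a real LLM in scale or architecture (those predict tokens with neural networks, not raw counts), but it’s the same in spirit: it just replays the word-to-word statistics of whatever it was fed.

```python
import random
from collections import defaultdict

# Toy bigram "language model": predict the next word purely from
# counts of what followed each word in the training text.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

following = defaultdict(list)
for word, nxt in zip(corpus, corpus[1:]):
    following[word].append(nxt)

def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length):
        if word not in following:  # dead end: no observed successor
            break
        word = random.choice(following[word])  # sample from the statistics
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug" -- a remix of the input
```

It can only ever recombine what it was trained on, which is why the quality of the training data is everything.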

The problem is that over time more and more of the internet will be AI-produced. AI will be feeding on its own output, rather than on organic human writing. It’s already been noticed that AI which eats its own dogfood tends to go nuts, and it’s fairly clear that AI is rather bland: it’s a blender for what’s been done before.
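Here’s a toy illustration of that dogfood loop, loosely modelled on the “model collapse” effect reported in the research literature. The Gaussian model, the sample sizes, and the tail-trimming step (standing in for a model favouring its most typical outputs) are all my assumptions for the sketch: fit a simple model to data, generate synthetic data from it, retrain on that output, and repeat.

```python
import random
import statistics

# Toy "model collapse": fit a simple model (a Gaussian) to data,
# generate synthetic data from it, retrain on the output, repeat.
# To mimic a model favouring its most typical outputs, each generation
# keeps only samples within 1.5 standard deviations -- the tails
# (the unusual, interesting content) get lost first.

data = [random.gauss(0.0, 1.0) for _ in range(2000)]  # generation 0: "human" data

for generation in range(1, 11):
    mu = statistics.fmean(data)       # "train" on the current data
    sigma = statistics.stdev(data)
    samples = [random.gauss(mu, sigma) for _ in range(2000)]  # generate output
    data = [x for x in samples if abs(x - mu) <= 1.5 * sigma]  # keep the typical
    print(f"generation {generation}: std = {statistics.stdev(data):.3f}")
```

Run it and the standard deviation shrinks every generation: the variety drains out, and the blender keeps blending an ever-blander mix.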

So AI will, like Google, damage the very resources it needs to work well, especially since most people won’t admit that their content is AI-generated, so it’s hard to filter out. It will slowly degrade over time, with some bumps of improvement as new advances are made.

Mind you, LLM AI isn’t general AI. It’s not smart; it’s just another algo. It doesn’t understand anything. Real AI will have to wait for further advances, if it ever happens at all.

Still, enjoy it now while you can. I expect it’ll get better for two to three years, then degrade. That’s the optimistic view; there’s some evidence that the degradation is already underway.

You get what you support. If you like my writing, please SUBSCRIBE OR DONATE