The horizon is not so far as we can see, but as far as we can imagine


Why Technocratic Elites Aren’t Trusted (Sam Altman Edition)

So, Sam Altman recently said something which seems reasonable, but isn’t:

using technology to create abundance–intelligence, energy, longevity, whatever–will not solve all problems and will not magically make everyone happy. but it is an unequivocally great thing to do, and expands our option space. to me, it feels like a moral imperative...

most surprising takeaway from recent college visits: this is a surprisingly controversial opinion with certain demographics.

(Lack of capitalization from the original.)

Back in the original Greek writings on rhetoric and argument, one of the three appeals was ethos: the rhetorician’s qualifications, including his ethical qualifications. The “why should we listen to you?” part. If you’re talking about courage, are you brave? If charity, are you charitable?

If technology, do you use it for the good of others?

What most people can’t explicate about their objection to Altman’s thesis (that using technology to create abundance is a good thing) is that they don’t trust Altman. OpenAI was originally a non-profit, meant to create AI in a way which would benefit everyone. Altman turned it into a for-profit, and no one except billionaires and sycophants thinks that companies exist to benefit the majority of people: we work in them, we know it’s bullshit.

And how did Altman create his AIs? By training them on other people’s work, without permission or payment. Further, the AIs compete with the people whose work they trained on: you can ask for a picture in the style of a particular artist, for example. They compete with artists, writers, and other professionals in general.

So the people whose actual work made AIs (they aren’t really AIs, but I use the term for convenience) possible are the ones harmed by them, AND they didn’t give their permission or get paid.

Why the hell would anyone other than a shareholder or a well-paid employee “trust” Sam Altman?

Now let’s move on to Altman’s actual argument (his logos and pathos):

using technology to create abundance–intelligence, energy, longevity, whatever–will not solve all problems and will not magically make everyone happy. but it is an unequivocally great thing to do, and expands our option space. to me, it feels like a moral imperative...

Now, this is a case where the logos is almost entirely true.

But what’s the actual track record of using technology to create abundance?

We’re losing our topsoil. Food is less nutritious than it used to be. We’ve created climate change, which appears to now be past key tipping points and will kill and impoverish billions. Most of the American population is fat; it wasn’t fifty years ago, so it’s not “individual choices.” We have widespread ecological collapse, including the loss of most large mammals, and so few insects compared to even fifty years ago that there is no longer “bug splat” on windshields. The oceans are full of plastic, and the coral reefs are dying while fish stocks collapse.

None of this is to say that technology hasn’t had vast benefits, but we’re also using it to reduce our option space: to damage the carrying capacity of the Earth in ways which will take tens of thousands of years to recover from as a best-case estimate (millions of years for some of the issues). The last 40 years, when people like Altman have had the most influence, have seen a vast rise in inequality and a huge number of homeless people. Altman and co. blame left-wingers, but who are the billionaires? Who actually has the power?

Altman’s making an argument which is true on its face, but he belongs to a class of people whose actions do a great deal of harm. Most people can’t clearly articulate this, but they know he and his class can’t be trusted, so they instinctively disagree with him; since they can’t quite say why, they sound incoherent.

But they’re right to distrust Altman. Technology could be used to benefit everyone, even in the long term, but Altman isn’t trying to do that: he’s trying to get rich, and if that hurts a lot of people along the way, he’s OK with it.

You get what you support. If you like my writing, please SUBSCRIBE OR DONATE

GIGO: The Past of Google Search Is the Future Of LLM AI Models

If you’re old enough, you remember how amazing Google search was when it first came out and for the first few years: excellent results, right at the top. Nowadays it’s crap, and half the time, to find what I want, I have to append “Reddit” to the query or search very specific domains. (Reddit is likely to be worthless in a few years due to the IPO.)
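(Concretely, that workaround is Google’s standard site: operator; the queries below are made-up examples.)

    mechanical keyboard recommendations reddit
    treating plantar fasciitis site:mayoclinic.org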

Anyway, Google search results became crap for three main reasons, from least to most important:

  1. Worship of the official and the orthodox. Every time I search some medical issue, the top twenty sites tell me the same thing. That didn’t use to be the case: for cancer, for example, the old “cancer tutor” site would be on the first page. Maybe it’s good that the equivalent isn’t there any more, but I wanted to read the alternative views as well as the orthodoxy.
  2. Monetization. Prioritizing selling ads over providing the best search results has had the effect one would expect.
  3. Organic link destruction. What made Google so good at the start is that its algo was almost entirely based on the number of incoming links a site had. Since the internet at that point was almost all human-created, links were highly curated and genuine: someone had read the site, liked it, and taken the time to link to it. Nowadays, most links aren’t organic: they’re SEO crap or advertising or intended to play the search algo, leading to an endless arms race. A link is no longer an endorsement, and there’s no easy way around that: nothing can replace a human being reading a site, liking it, and linking to it. (A sketch of the old link-counting idea follows this list.)
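For the curious, here is a minimal sketch of that link-counting idea in Python: a simplified PageRank over an invented five-page graph (the page names and numbers are made up for illustration). A page’s score comes from the scores of the pages that link to it.

    # Simplified PageRank over a made-up link graph. The principle of
    # early Google: a link is a vote, and votes from well-linked pages
    # count for more.
    links = {
        "useful-blog": ["reference-site"],
        "reference-site": ["useful-blog", "niche-forum"],
        "niche-forum": ["reference-site"],
        "seo-spam-a": ["seo-spam-b"],  # link farm: pages that only
        "seo-spam-b": ["seo-spam-a"],  # link to each other
    }

    def pagerank(links, damping=0.85, iterations=50):
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new = {p: (1.0 - damping) / len(pages) for p in pages}
            for page, targets in links.items():
                share = damping * rank[page] / len(targets)
                for target in targets:
                    new[target] += share
            rank = new
        return rank

    for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
        print(f"{page}: {score:.3f}")

Note what the toy can’t do: the algorithm has no way to tell the link farm’s votes from genuine ones, which is exactly the arms race described in point 3.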

Google, to put it simply, destroyed its own usefulness by destroying the internet ecosystem that had organic links, links by people who didn’t expect to be paid for them, to sites they found interesting whether those sites were official or orthodox or not.

Now, Large Language Model (LLM) AI is built by training on, basically, the entire internet. It’s essentially statistical. How good the AI is depends on how good its training data is, with a lot of tweaking to try and point it away from stuff that is horrible (not towards good, just away from horrible, like avoiding racism).
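To make “essentially statistical” concrete, here’s a toy sketch: a bigram model. Real LLMs are neural networks trained on vastly more data, but the spirit is the same. “Training” is counting which word follows which, and “generation” is sampling from those counts; the training sentence is, obviously, made up.

    import random
    from collections import defaultdict

    # Toy bigram "language model": it knows nothing except the
    # statistics of which word followed which in its training text.
    training_text = "the cat sat on the mat and the cat ate the fish"

    followers = defaultdict(list)
    words = training_text.split()
    for current, nxt in zip(words, words[1:]):
        followers[current].append(nxt)

    def generate(start="the", length=8):
        out = [start]
        for _ in range(length):
            options = followers.get(out[-1])
            if not options:  # no observed continuation: stop
                break
            out.append(random.choice(options))
        return " ".join(out)

    print(generate())  # e.g. "the cat ate the mat and the cat sat"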

The problem is that, over time, more and more of the internet will be AI-produced. AI will be feeding on its own output rather than on organic human writing. It’s already been noticed that AI which eats its own dogfood tends to go nuts, and it’s fairly clear that AI output is rather bland: it is a blender for what’s been done before.
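This degeneration has been studied under the name “model collapse.” Here’s a toy sketch of the mechanism, using simple resampling rather than an actual model: each generation “trains” only on the previous generation’s output, so distinct items can be lost but never regained.

    import random

    # Toy "model collapse": generation N's training data is sampled
    # from generation N-1's output. Diversity only ever shrinks, and a
    # few items end up over-represented, a crude analogue of AI
    # blandness compounding on itself.
    random.seed(42)
    pool = [f"idea-{i}" for i in range(100)]  # 100 distinct "human" ideas

    for generation in range(25):
        print(f"gen {generation:2d}: {len(set(pool))} distinct ideas left")
        pool = [random.choice(pool) for _ in range(len(pool))]

Run it and the count of distinct ideas only ever falls, and never recovers.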

So AI will, like Google, damage the very resources it needs to work well, especially since most people won’t admit that their content is AI-generated, which makes it hard to filter out. It will slowly degrade over time, with some bumps of improvement as new advances are made.

Mind you, LLM AI isn’t a general AI. It’s not smart, it’s just another algo; it doesn’t understand anything. Real AI will have to await further advances, if it ever happens at all.

Still, enjoy it now while you can. I expect it’ll get better for two to three years, then degrade. That’s the optimistic view; there’s some evidence that the degradation is already underway.

You get what you support. If you like my writing, please SUBSCRIBE OR DONATE

Google Neural Net “AI” Is About To Destroy Half The Independent Web

As various folks have quipped, the safest place to hide a body is on the second page of Google search results, because no one goes there.

Google is about to roll out its “AI” for search (I’ll be putting AI in quotes as a matter of policy when referring to neural nets, because they aren’t intelligent), and if it stays as it is, it’s going to destroy most sites that provide information or analysis. (I’ll feel some of the hit, but will survive, as I have my own audience.)

The screenshot of Google’s “AI” answer box is the kicker: it takes up most of the page. Worse, people don’t like to click, so if Google presents the info they want, they’ll just stay on Google.

Now, of course, Google is summarizing data that the neural net has scraped from the Web, much like when you read some books and then summarized them for your term paper. None of the information Google’s “AI” presents in answer to questions comes from Google; it’s scraped, swallowed, and regurgitated from the websites which won’t be getting the traffic any more, and which will then die. The perfect parasite.

There are going to be lawsuits, and I’m no lawyer, but my understanding is that, just as when you do your research and rewrite it as a summary, this probably doesn’t violate current copyright law. That rule is entirely reasonable for people, but for neural nets it leaves a huge gap, and without a change in the law, it seems unlikely there’s a legal remedy.

I’m thinking about this. I may decide to keep most of my site off search engines (which is a problem in the sense that I’ve written so many articles I use search engines to find my own).
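For what it’s worth, the blunt instrument for that is robots.txt, and the big crawlers now publish separate tokens for AI training. A sketch, assuming one wants to stay in ordinary search while opting out of training (GPTBot is OpenAI’s crawler, Google-Extended is Google’s AI-training token, CCBot is Common Crawl); honoring the file is entirely voluntary on the crawler’s side.

    # robots.txt -- opt out of AI training crawlers while remaining
    # in ordinary search. Compliance is voluntary: well-behaved bots
    # honor it, scrapers often don't.
    User-agent: GPTBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

    User-agent: CCBot
    Disallow: /

    # To drop out of search engines entirely instead:
    # User-agent: *
    # Disallow: /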

But in the larger sense, “AI” is a giant parasite (well, Google won’t be the only one), devouring other people’s expertise and denying them a living. Google already controls about 45% of the internet ad market, with most of the rest divided up between various social media giants, and in doing so it destroyed a vast swathe of sites. Now they are set to kill much of what remains.

Tacitus’s line about the Roman Empire, supposedly quoting Calgacus, was that the Romans “made a desert and called it peace.” Google and “AI” are making an internet wasteland and calling it profits.


My ability to write these articles depends on donors and subscribers so if you value this writing, please DONATE or SUBSCRIBE.
