Let’s lay out the big picture for LLM-style AI.
It is a statistical prediction of the next word or symbol. That is why it required so much data to train, and why, even if we had had the tech, we couldn’t have created it 20 years ago: there wasn’t enough data in digital format. It is not intelligent. It is not conscious. It is just an algorithm trained on a TON of data, using massive amounts of processing power (and thus electricity) to produce results. Hallucinations are part of the tech; they cannot be eliminated, which means that LLM “AI” will always make mistakes, and many will almost certainly be the sort of mistakes trained humans rarely make.
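To make “statistical prediction of the next word” concrete, here is a toy sketch: a bigram model that counts which word follows which in a training corpus and then predicts the most frequent successor. This is nothing like a real LLM in scale or architecture (the corpus, function names, and `<unknown>` token are all illustrative), but it shows the core idea, including why more data means better predictions and why a model with no data for a context can only guess.

```python
from collections import Counter, defaultdict

def train_bigram(corpus: str):
    """Count which word follows which in the training text."""
    words = corpus.split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word: str) -> str:
    """Return the statistically most likely next word."""
    if word not in follows:
        # No training data for this context: the model has nothing
        # real to go on (a crude analogue of hallucination).
        return "<unknown>"
    return follows[word].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often
```

A real LLM replaces the lookup table with a neural network and a probability distribution over tens of thousands of tokens, but the objective is the same: predict what comes next, given what came before.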
The current build-out in the US involves only a few companies, all in a huge circle jerk, and they make up 40% of the entire public stock market’s value. Neither OpenAI nor Anthropic actually makes a profit, and each query costs more to serve than it brings in, even from paying customers, let alone all the free ones. There is absolutely no question in my mind that they are in a bubble.
The maximalist claim for “AI” is that it will become smart enough to replace at least 40% of jobs. The more realistic claim is that it’s good for some things and can replace some workers by making those who remain more efficient. Plus, after all, most tech companies don’t care if their products are shit as long as they make money. See Google for “who cares what you think, it’s us or no one; you’ll use our product no matter how shit it is.” (Ironically, search is one of the few things AI does better than the incumbents.)
So here’s the thing: whether or not AI is a real tech, it’s in a bubble. (The internet was real; it still had a bubble.) No one actually knows who’s going to make money from AI. The big internet winners (Amazon, Facebook, Google) came after the dot-com bust. The Feds may backstop and/or bail out; if they do, it will hurt everyone not involved.
If I am wrong about AI and the maximalist claims are true, then what will happen is a massive replacement of tens of millions of workers. Since those people will then have almost no income, that will lead to a classic demand depression, like the Great Depression of the 1930s. The only way out would be a massive guaranteed annual income. Given our rulers and ideology, we’d probably have food riots long before they realized they were risking their own throats.
If it is a real tech, but not that big a deal, it will lead to a shittier economy where even more mistakes are made, and it’s even harder to find a human being to fix anything. Which is what tech wants: they want everything automated, and they certainly don’t want to provide real customer service.
And, if it is a real tech, as I have noted before, China is actually going to win. Their models are 20 to 30 times cheaper to run, and they are open source. If your business uses AI, you will use open source if you have half a brain, because with open source neither of the two major closed providers (Anthropic/OpenAI) can just raise prices or change the model on you. To use closed source would be so stupid that even most American CEOs will not do it. Certainly no one with sense outside the American vassal swarm will be so stupid.
So:
- Maximal AI leads to a great depression.
- Moderate AI leads to a shittier economy and shittier projects.
- There’s a bubble either way.
- At the end of it, China’s AI models will be used far more than American ones anyway. The US has already “lost” the AI race and can’t even see it. (Why? Fundamentally because they’re greedy and want to become billionaires or trillionaires, and genuine open-source AI won’t print nearly as many rich people.)
America can’t win at anything that matters any more, because the people who lead America are stupid, liars, and so greedy they can’t think of anything but money. (See Trump, who is the avatar of all these vices.)
This site is only viable due to reader donations. If you value it and can, please subscribe or donate.
AI is a no win for any society. Any technological innovation has been a net zero or less than zero proposition in regard to its impact on societies. Eve never should have taken a bite of that apple.
Eric Anderson
So, do you think we meatspacers can put enough “AI Bubble” 1’s and 0’s into the digital ether that the LLMs begin algorithmically predicting their own demise?
Truly a snake eating its own tail.
Steve
The only things America excels at are sabotage, subterfuge, and propaganda. We’re light-years ahead of anyone else in these areas. America is banking on them, and I wouldn’t count us out.
mago
Techno authoritarians = techno terrorists playing the short con. The AI scam is distorted, inhumane and destructive on multiple levels—environmentally, economically and socially.
We’re all losers in this game, but maybe maximum destruction is what’s needed to win in the long term. Lots of suffering along the way. As Tom Petty put it, there ain’t no easy way out.
Joan
Based on some recent experiences I am now wondering whether some tools are using AI even without advertising it. I say this because I use Deepl translate as I study a foreign language, and even the regular translator, not the AI tool, has started hallucinating at me.
I’ll copy-paste a sentence into the box, but the English that comes out will start with the translated sentence, and then just start riffing from it, adding a couple of sentences that weren’t pasted in and that often don’t make much sense.
spud
when i was younger, in the early 1980s, i had a computer program called S.A.M.; it stood for Software Automated Mouth.
after loading and starting the program, you would type in what you wanted to hear, the program would run its algos, and voila, your computer would speak back the words you had just typed in.
or i had a DOS program that would search the disk for what i wanted.
so was that AI? :) because it sure sounds like what is being peddled today.
you would think some brilliant wall street analyst, or one of our towering investigative journalists from the legacy media, would mention that we have seen this all before.
Soredemos
Calling these glorified chat bots Artificial Intelligence was a brilliant marketing coup.
In reality, actual AI research hasn’t advanced much in decades. LLMs may end up being a component of a hypothetical true digital intelligence, but by themselves they aren’t doing anything even vaguely like actual thinking.
The way so many people have been suckered into worrying about robot rebellions and Isaac Asimov themes is impressive. Those are philosophical questions that have yet to leave the realm of the purely hypothetical. ‘AI’ isn’t AI.
ibaien
one of the funny things about living in a deeply purple american city (san diego) is that everyone wants to talk, and everyone is resolutely against this future – left, right, jesus freak, anarchist, indifferent. there simply isn’t an AI constituency. kinda makes you wonder why we’re doing this to ourselves…
bruce wilder
Demolition gets a bad rap, I think, because it literally is destruction, but demolition is not itself failure. Demolition is the clearing away after failure. Failure is, well, failure. Failure is the bad thing, the source or cause of pathology. And, demolition is not logically necessary. Societies and people can refuse to acknowledge or demolish their failures and then live on among the ruins. A reason to do that, to live among the ruins, is that you cannot or will not think. If you cannot even imagine a feasible future — “feasible” is a key term; feasible as opposed to fantastic — then there is no reason to undertake demolition. “Preservation” becomes an intellectual refuge. Further elaboration of already failed systems is the usual human social response.
Several years ago, when anticipating the apocalypse was becoming fashionable, an anthropologist named Joseph Tainter wrote a book about the collapse of ancient civilizations as reflected in archaeological remains. His theory seemed to be that elites confronted with the incipient failure of elaborate and complex systems of social organization and control — systems that put the elites on top and in charge — would double-down on more elaborate and complex control, mining the foundational layers of pyramidal societies to feed further elevation of the apex. The people at “the bottom” literally producing food to feed the rest would be immiserated to obtain resources to elaborate complex, dysfunctional administration and ceremony.
GrumpyBcell
My hypothesis is that the “intelligence” of AI is the same kind of smarts shown by the AI peddlers. They are unable to recognize the problem because it is the same scam they have been so successful with.
Listening to a certain South African, it is clear they know the buzzwords and can sound smart to someone who doesn’t know the field. If you do know the field, it is jargon and gibberish.
AI has a lot more snake oil in it than actual medicine.
It all has to do with recovering lost investment any way they can.
different clue
@ibaien,
We aren’t doing this to ourselves. THEM are doing this to US. US would have to figure out how to tear down and destroy THEM in order to stop THEM from doing this to US. In the meantime, is there a way for US to sabotage, degrade, attrit, etc. THEM’s AI data centers, etc.?
By the way, I wonder how many people are really studying non-human NI ( Natural Intelligence)? Here is an example of RI ( Raven Intelligence).
” The bird even realized he had won 😅🥳 ”
https://www.reddit.com/r/nextfuckinglevel/comments/1owzes2/the_bird_even_realized_he_had_won/
Is there a way to open a path for the “evolving” augmentation of non-human NI ? What if birds had hands partway up their wings, so they could fly AND manipulate things? Here is an image suggestive of that possibility, called ” Chicken with a genetic defect. ”
https://www.reddit.com/r/oddlyterrifying/comments/rr7zl0/chicken_with_a_genetic_defect/
Behold the feet on the ends of its “wings”. What if we could tweak the genetic expression of some of the very smartest birds to get them to grow “hands” on their wings without compromising their ability to fly? Picture ravens, parrots, etc. with hands as well as wings and feet. How long before they start building their own civilizations?
What if we could change octopus metabolism and aging in such a way that octopuses could live for decades . . . multigenerational lifespans and evolving octopus societies? How long before such octopuses evolved and developed their own undersea civilizations?
KT Chong
It’s surprising the article doesn’t mention electricity and power, because energy cost and infrastructure are the #1 strategic bottleneck for AI at scale.
The U.S. already cannot match China on either. China isn’t just building—it’s overbuilding, with new hydro, nuclear, solar, and wind coming online every few months, and power supplies already exceeding demand. The U.S., by contrast, is hitting power ceilings. New infrastructure projects remain stuck in the planning phase. Even if the U.S. started building today, any major project or upgrade would take 5–15 years to complete, while AI demand grows exponentially.
In short: China has power overabundance (the “overcapacity” and “overproduction” the West complains about, as if abundance were a bad thing) and timing on its side; the U.S. faces an energy bottleneck that will doom its AI ambitions.
different clue
I remember reading once a “true-life joke”. Some time after the fall and breakup of the USSR, some American computer engineers, architects and other computerologists inspected all the USSR computers and computer facilities they wanted to. At the end of it all, they said ( in a state of some surprise and upsetness) something like ” Why were you keeping such secrecy about your Computer Effort? Your computering is 50 years behind ours. What was the big secret?” The legacy-Soviet computerologist replied . . . ” That WAS the secret.”
But the NHI ( Natural Human Intelligence) of the various Soviet scientists, engineers, etc. was never in doubt. It was Soviet engineering, after all, which invented the AK-47.
So if AI fails here and is admitted to fail here, and takes down a lot of present Computer Capacity with it, Americans will be forced to turn back to NHI ( Natural Human Intelligence).
Wallflower
Different Clue:
There’s an additional punchline to your (true) joke about the USSR concealing its backward computer capability (capaBITity?) during the Cold War: the West found it expedient to exaggerate the Soviet threat to justify maximum investment in its own tech industry. This is even more true with respect to military infrastructure: the more “we” played up the Soviet menace, the more money could be shoveled into companies servicing that fear.
The fact that there was (and is) a very real nuclear threat to the world from both sides doesn’t change the money-grubbing math.
different clue
@Wallflower,
You’re right. I forgot about that follow-on to the true joke. I was thinking more about how it showed that NHI ( Natural Human Intelligence) can be very intelligent. I read somewhere that the USSR’s physically backward level of computerization forced the Soviet computerologists to become very elegant and refined in their thinking about how to structure a problem so that their slow and weak computers could solve it in real time with useful results. Soviet mathematicians were supposed to be some of the best in the world.
CapaBITity? hmmmm . . . . maybe capaBITility? Maybe both can be tried and see where they go.
Meanwhile, there’s this: ” Meta’s top AI researchers is leaving. He thinks LLMs are a dead end”
https://www.reddit.com/r/technology/comments/1oygsx4/metas_top_ai_researchers_is_leaving_he_thinks/
Some of the comments in that article’s thread are good too.
Well, maybe Nelson Muntz has a message for Mark Zuckerberg . . .
https://www.youtube.com/watch?v=eOifa1WrOnQ