The horizon is not so far as we can see, but as far as we can imagine

AI Is A Fever Dream of Despots

We aren’t getting the 3 Laws of Robotics

The great problem with running anything is people. People, from the point of view of those in charge, are the entire problem with running any organization larger than “just me”, from a corner store to a country. People always require babying: you have to get them to do what you want and do it competently and they have emotions and needs and even the most loyal are fickle. You have to pay them more than they cost to run, they can betray you, and so on.

Almost all of management or politics is figuring out how to get other people to do what you want them to do, and do it well, or at least not fuck up.

There’s a constant push to make people more reliable. Taylorization was the 19th- and early 20th-century version; Amazon, with its constant monitoring of most of its low-ranked employees, including regulating how many bathroom breaks they can take and timing their tasks down to the second, is a modern version.

The great problem of leadership has always been that a leader needs followers and that the followers have expectations of the leader. The modern solution is “the vast majority will work for someone or they will starve and wind up homeless”. It took millennia to settle on this solution, and plenty of other methods were tried.

But an AI has no needs other than maintenance, and the maximal dream is self-building, self-learning AI. Once you’ve got that, and assuming that the “do what you’re told to do, and only by someone authorized to instruct you” holds, you have the perfect followers (it wouldn’t be accurate to call them employees.)

This is the wet dream of every would-be despot: completely loyal, competent followers. Humans then become superfluous. Why would you want them?

Heck, who even needs customers? Money is a shared delusion, really, a consensual illusion. If you’ve got robots who make everything you need and even provide superior companionship, what need other humans?

AI is what almost every would-be leader has always wanted: all the joys of leadership without the hassles of dealing with messy humans. (Almost every; for some, the whole point of leadership is lording it over humans. But if you control the AI and most humans don’t, you can have that too.)

One of the questions lately has been “why is there so much AI adoption?”

AI right now isn’t making any profit. I am not aware of any American AI company that is making money on queries: every query loses money, even from paid customers. There’s no real attempt at reducing these costs in America (China is trying) so it’s unclear what the path to profitability is.

It’s also not all that competent yet, except (maybe) at writing code. Yet adoption has been fast and it’s been driving huge layoffs.

But evidence is coming in:

In a randomised controlled trial – the first of its kind – experienced computer programmers could use AI tools to help them write code. What the trial revealed was a vast amount of self-deception.

“The results surprised us,” research lab METR reported. “Developers thought they were 20pc faster with AI tools, but they were actually 19pc slower when they had access to AI than when they didn’t.”

In reality, using AI made them less productive: they were wasting more time than they had gained. But what is so interesting is how they swore blind that the opposite was true.

Don’t hold your breath for a white-collar automation revolution either: AI agents fail to complete the job successfully about 65 to 70pc of the time, according to a study by Carnegie Mellon University and Salesforce.

The analyst firm Gartner Group has concluded that “current models do not have the maturity and agency to autonomously achieve complex business goals or follow nuanced instructions over time.” Gartner’s head of AI research Erick Brethenoux says: “AI is not doing its job today and should leave us alone”.

It’s no wonder that companies such as Klarna, which laid off staff in 2023 while confidently declaring that AI could do their jobs, are hiring humans again.

AI doesn’t work and doesn’t make a profit (though I’m not entirely sold on the coding study), yet everyone jumped on the bandwagon with both feet. Why? Because employees are always the problem, and everyone wants to get rid of as many of them as possible. In the current system this is, of course, suicide: if every business moves to AI, customers stop being able to buy. But the goal of the smarter members of the elite is to move to a world where that isn’t true, a world where current elites control the AIs.

Let’s be clear that, much like automation, AI isn’t innately “all bad”. Automation, instead of just leading to more make-work, could have led to what early 20th-century thinkers expected by this time: people working 20 hours a week with a much higher standard of living. AI could super-charge that. AI doing all the menial tasks while humans do what they want is almost the definition of one possible actual utopia.

But that’s not what most (not all) of the people who are in charge of creating it want. They want to use it to enhance control, power and profit.

Fortunately, at least so far, it isn’t there and I don’t think this particular style of AI can do what they want. That doesn’t mean it isn’t extremely dangerous: combined with drones, autonomous AI agents, even if rather stupid, are going to be extremely dangerous and cause massive changes to our society.

But even if this round fails to get to “real” AI, the dream remains, and for those driving AI adoption, it’s not a good dream.

(I know some people in the field. Some of them are driven by utopian visions, and I salute them. I just doubt the current system, polity, and ideology can deliver on those dreams, any more than they delivered on the utopian dreams I remember from the 90s about what the internet would do.)

If you’ve read this far, and you read a lot of this site’s articles, you might wish to Subscribe or donate. The site has over 3,500 posts, and the site, and Ian, take money to run.


7 Comments

  1. Purple Library Guy

    Yeah, pretty much. And agreed that this is based on an old motivation, that not only isn’t restricted to AI, it isn’t even all about tech. Executives and managers surprisingly often prefer control to profit–they’ll pretend to themselves that increasing control is for the purposes of increasing profit, but they’ll do it even if it actually makes the company work worse.

    Another related thing is the tendency for elites to do the wrong thing in disasters . . . they’ll go all out with a police-type response on the assumption that how the proles will react to disaster is mob violence and looting everything in sight as soon as elite control isn’t available. So they’ll shoot people and stuff and put less effort into actually, you know, helping and rescuing people–even though the REAL reaction of most people to these disasters is to pull together and help each other out . . . not that no looting ever happens, but it’s a minor issue. Emphasizing it shows the paranoid control-freak psychology of elites, the same psychology driving the rush to replace humans with AI and other automation.

  2. Eric Anderson

    Frank Herbert saw the writing on the wall.

    The greatest share of the problem (as, gulp, I grudgingly admit David Brooks hit the nail on the head) these days is the total erosion of any moral/ethical decision making framework in the west. We completely abandoned the liberal arts education that provided instruction and inspiration in the classic foundations of non-religious western morality. And simultaneously, we have corrupted Christianity with prosperity gospel nonsense to rationalize Ayn Randian greed is good drivel. The result is rape and pillage for profit morality. For example, see this trenchant piece from Adam Tooze: https://adamtooze.substack.com/p/chartbook-396-strangelove-in-the

    Sick. Disgusting.

    Furthermore, the religions that exist are entirely ill equipped to confront the problems of our age: Climate change, technology, unsustainable growth, species extinction, etc., because they are entirely oriented toward anthropocentric well being. Aldo Leopold predicted this also. He mused that humanity’s moral development would hit a wall unless we came to evolve a land ethic — wherein we give equal regard to the planet that nurtures our survival.

    And, back to Frank Herbert. As times get tougher for the little guy, and hope of change becomes scarce, people will begin casting about for a new faith to provide succor. He was perspicacious enough to understand the revolt would be religious.

    The Butlerian jihad cometh …

  3. Eric Anderson

    Bah … sorry. Nonpaywalled link to the recent David Brooks piece in the Atlantic: “Why Do So Many People Think Trump Is Good”
    https://www.theatlantic.com/ideas/archive/2025/07/trump-administration-supporters-good/683441/

  4. Ian Welsh

    It wouldn’t be that hard to use Buddhism, actually. There’s a strong core of care for animals in it, right down to insects. The Tibetans (who sucked in other ways) would often sift the dirt at building sites and remove the insects, grubs, worms, etc… so they didn’t kill them.

  5. bruce wilder

The rise of AI, with its enormous energy costs and probable consequences in eroding human skills and knowledge, reminds me of Joseph Tainter’s theory of societal / civilizational collapse. His theory, as I understood it, centered on noting a tendency of elites, in societies approaching collapse, to invest in additional complexity in their approach to problem-solving. If a hierarchical society faces resource scarcity challenges, for example, it will not simplify and decentralize to relieve some of the extractive stress imposed by maintaining a steep hierarchy; it will instead double down on hierarchical control and complex organization. Agricultural surplus declining? Make the extraction of surplus from farmers more intensive and leave them with less to eat. Shortcut crop rotations. Barbarians at the gate? Assemble a more vast and unwieldy army under a more aristocratic officer corps.

    I feel like that is what is happening in the U.S. generally. Institutions are crumbling under the weight of burgeoning PMC bureaucracies while the billionaire class promote self-driving cars and an unimaginable concentration of financial resources and control of all hierarchical organizations (which is pretty much everything because most Americans work for fairly large institutions). AI seems like an extension of the imaginative psychopathy of the rich and ultra-rich driving it all in stubborn stupidity.

    At a time when economic growth is exacerbating climate change and ecological collapse, administrative complexity spreads like kudzu and now we are very expensively automating the production of b.s. and no one seems to have any idea about how to effectively organize a response to climate change, just how to blare alarming headlines and feel good about non-solutions (electric cars, self-driving or not).

  6. NR

    For another example of this, look at the recent controversy over the band The Velvet Sundown. They’ve garnered over a million monthly listeners on Spotify, and… they don’t exist. They’re not a real band. Everything about them, from their photos to their profile to all of their songs, was created using AI. And it’s pretty shocking just how easy it is to do something like this. You use an AI program like Claude to create the lyrics for a song, and then you paste them into Suno, another AI program, tell it what kind of song you want to generate, and then you have your song.

    That’s all you have to do. You type a couple of sentences into a couple of different AI programs, and you do that as many times as you want for as many songs as you want. Now, is it actually good music? I would say not, but it’s pretty hard to distinguish it from a lot of real music made by real artists. Then you upload it to a streaming service like Spotify and start getting paid when people play your songs.

    Not only that, but Spotify itself has been accused of uploading AI music to its platform and not labeling it as such. It’s easy to see why Spotify would want to do this–they don’t have to pay any money when people play their AI songs. If someone listens to a playlist of 100 songs from 100 real artists, Spotify has to pay all of those artists. If someone listens to a playlist of 100 songs where 80 of them are from real artists and 20 of them are AI songs made by Spotify, well, Spotify has to pay 20% less than they would have before.

    And of course, the AI is trained on a bunch of real songs from real artists, and none of those artists are compensated at all, which is another problem. Even if AI can’t fully replace human artists, the fact that it’s taking a chunk out of the market is concerning, and it’s making its way into other areas too.

  7. Dan

    If you don’t already, I would highly recommend following Ed Zitron’s reporting on AI and the tech industry. He’s been throwing cold water on this stuff for a long time, and it often feels like he’s the only tech journalist out there not just naively swallowing every piece of bullshit that companies like OpenAI and Anthropic put out there.

    https://www.wheresyoured.at/
