The horizon is not so far as we can see, but as far as we can imagine


Is Consciousness Reality’s Organizing Principle? (Beyond Biocentrism, by Lanza and Berman)

Quantum mechanics is seriously weird. The majority of us have a model of the world based primarily on Newtonian physics. We believe in cause and effect. The universe is a giant machine following laws, and if there wasn’t a single conscious being in it, those laws would still be the same.

But in quantum mechanics particles like photons don’t exist as particles until observed. If a photon is given the choice of two paths, it takes both as a wave, but if measured and observed to see which path it took, it then takes only one.

The key word here is observed. Even when the measurement happens after the fact, the photon doesn’t settle on which path it took until it is observed. (It may decohere, but it doesn’t choose which way to decohere until then. Or that’s Lanza and Berman’s argument.)

Schrödinger’s cat is an attempt to scale this up to the macro level, and to show how absurd it is: “The cat is both alive and dead.” (It doesn’t really work, because the cat is conscious and observes itself.)

Lanza has written a series of books on Biocentrism, each more extreme than the last. Beyond Biocentrism is the third in the series.

Biocentrism takes the quantum physics at face value and tries to extend the consequences. It argues that nothing really exists except in potentiality (a range of possibilities) until it is observed by something that is conscious. This doesn’t have to mean a human; presumably any conscious being will do the job. Lanza discusses birds and fish and bats and dogs, all of whom observe the world differently than we do, but I’d point out that evidence is coming in that at least some plants (almost certainly trees) are conscious. Perhaps single-celled entities are, and we keep looking for those in places like Mars and the subsurface oceans of moons and so on.

Lanza notes that the conditions for life, especially Earth life, are very specific. From the values of the fundamental constants, to the Moon-forming impact striking the Earth in just the right way (neither destroying the Earth nor leaving the Moon orbiting the equator), the odds against a garden world like ours are astronomical. Even the odds of a universe existing which allowed for life in theory are astronomical.

Biocentrism resolves this by putting consciousness first. Concrete reality is formed by consciousness, so physical laws must conform to what is required for life, since it is biological life which gives rise to consciousness. The odds go from astronomical to “they had to support life, so they did.”

Lanza’s interpretations of the consequences of quantum mechanics or even of quantum mechanics itself aren’t always orthodox. For example, there’s a delayed choice experiment called the quantum eraser, in which finding out something in the future seems to change the past.

While delayed-choice experiments might seem to allow measurements made in the present to alter events that occurred in the past, this conclusion requires assuming a non-standard view of quantum mechanics. If a photon in flight is instead interpreted as being in a so-called “superposition of states”—that is, if it is allowed the potentiality of manifesting as a particle or wave, but during its time in flight is neither—then there is no causation paradox. This notion of superposition reflects the standard interpretation of quantum mechanics.

Lanza interprets this as: no, the change actually occurs in the past, and there is a causation “paradox,” though in biocentrism it’s not really a paradox, since consciousness is primary.

I don’t claim to know who’s right about this. Hopefully an experiment will be devised which resolves the issues. But Lanza brings it up in part to rescue free choice.

As you may be aware, experiments show that by the time we become consciously aware of making a decision, the decision has already been made. Researchers can tell that we’ll do something before we believe we’ve made the decision. Since neural activity is, at bottom, quantum, Lanza attempts to rescue free will by suggesting that the decision is indeed made when we believe it is; it just changes the past through the act of observation.

Without something like this, we are, in fact, biological machines and free will is an illusion. Blaming or taking credit for anything you have ever done, or anything you are, is ludicrous. You are just a cause and effect machine and your idea that you’re in control of any of it is an illusion. (Why that illusion should exist is an interesting question.)

I don’t consider myself qualified to judge Lanza and Berman’s work on Biocentrism. It might be substantially right and it might not be. But I do think he makes a good case that the science (which he describes at great length, including appendices with the math) doesn’t allow us to cling to Newtonian or even Einsteinian views of the universe or our place in it. Something weird is going on when consciousness is required to cause wave function collapse. Indeed, he even includes one experiment where the effect was scaled up to macro, though still a very small macro.

The world is strange. Far stranger than the still-reigning consensus “folk” models suggest. And while biocentrism may not be correct in all its details, it’s worth reading and considering, because it takes quantum mechanics’ weird results seriously and tries to reason from them, rather than around them in an attempt to preserve as much of the older systems as possible.

At the same time, we must always be wary. After all, post-Newton very few people outside of some religions would have argued against a clockwork universe, and it turned out that informed opinion was, well, wrong. (Which doesn’t mean God made the universe in 7 days or any such nonsense.)

Still, this is the cutting edge, and we know at the very least that it puts a few nails in the clockwork universe’s coffin and at least a couple into the relativistic universe. To ignore it, and to pretend that consciousness isn’t much more important than we thought it was, is head-in-the-sand thinking. And Lanza isn’t some quack. His interpretation may be unorthodox, but he understands the science.

I think this, or one of the other Biocentrism books, is well worth reading. Even if you wind up not buying the whole package, you’ll be forced to rethink what you “know.”

***

If you’ve read this far, and you read a lot of this site’s articles, you might wish to Subscribe or donate. The site has over 3,500 posts, and the site, and Ian, take money to run.

Why China’s Big On Open Source

Yesterday we discussed Chinese vs. American AI. The big difference is that a lot of China’s AI is open source. Not just DeepSeek, but:

In addition to Baidu, other Chinese tech giants such as Alibaba Group and Tencent have increasingly been providing their AI offerings for free and are making more models open source.

For example, Alibaba Cloud said last month it was open-sourcing its AI models for video generation, while Tencent released five new open-source models earlier this month with the ability to convert text and images into 3D visuals.

Smaller players are also furthering the trend. ManusAI, a Chinese AI firm that recently unveiled an AI agent that claims to outperform OpenAI’s Deep Research, has said it would shift towards open source.

Nor is it just in AI. An emphasis on open source isn’t just a private-sector matter; it’s in the latest five-year plan.

And that makes sense. Open source has the great advantage that it’s not subject to geopolitical risk: the US can’t cut off countries that use open source. It also has the advantage that private actors can’t squeeze you nearly as much. If you’re using proprietary tech, whoever owns it can raise prices or stop selling to you.

Moreover, non-Western customers are more likely to buy products built on open source, again because it’s much freer of geopolitical or private squeeze risk.

But probably the most important thing is that open source and open standards speed up innovation. Anyone who wants to can build on them without paying exorbitant fees, or without simply being locked out by patent or copyright concerns. If you actually want rapid advancement in tech, and China does (the US, overall, does not), then open source makes sense. The original intention of patents was to get inventors to share, not to lock in long-term profits. Patents were usually granted for relatively short terms.

The great difference between American and Chinese leadership, both private and public, is that the Chinese genuinely do think strategically and long term, and that Chinese leaders care (or, at least, in many more cases act as if they care) about China, not just their own companies or themselves. There is a unifying vision for the country, a true belief in technological advancement and a belief that technology can be used to help ordinary people. I remember seeing a cartoon on AI where in America it’s used to get rid of artists and writers, and in China it’s used to free people up so they can be artists and writers.

Who knows if it’ll work that way, but the “Jetsons” future assumes that tech is meant to do things for us so we can enjoy life more, not so that more and more people can be made poverty stricken, and China has that spirit.

When you believe in technology and science, truly, as for the public good and not just for private profit, well, you wind up leading the rest of the world in 80% of techs.

And soon it will be 90%.

 


Why China Is Going To Win The AI Race

When you look at AI right now, it has one major use case that people are really willing to pay for: coding. That means Cursor and, to a lesser extent, Replit. Let’s take Cursor as an example: it is built on top of other companies’ AI.

This is a problem, because Cursor doesn’t have a service to sell without making calls to other companies’ AIs, and those companies can raise prices and Cursor has to eat it.

As Zitron notes, this is what actually happened recently:

A couple of weeks ago, I wrote up the dramatic changes that Cursor made to its service in the middle of June on my premium newsletter, and discovered that they timed precisely with Anthropic (and OpenAI to a lesser extent) adding “service tiers” and “priority processing,” which is tech language for “pay us extra if you have a lot of customers or face rate limits or service delays.” These price shifts have also led to companies like Replit having to make significant changes to its pricing model that disfavor users….

  • On or around June 16 2025 — Cursor changes its pricing, adding a new $200-a-month “Ultra” tier that, in its own words, is “made possible by multi-year partnerships with OpenAI, Anthropic, Google and xAI,” which translates to “multi-year commitments to spend, which can be amortized as monthly amounts.”
  • A day later, Cursor dramatically changed its offering to a “usage-based” one where users got “at least” the value of their subscription — $20-a-month provided more than $20 of API calls — in compute, along with arbitrary rate limits and “unlimited” access to Cursor’s own slow model that its users hate.
  • June 18 — Replit announces its “effort-based pricing” increases.
  • July 1 2025 — The Information reports Anthropic has hit “$4 billion annual pace,”  meaning that it is making $333 million a month, or an increase of $83 million a month, or an increase of just under 25% in the space of a month.

In other words, Anthropic, which still isn’t making money even now, increased its prices and Cursor and Replit were forced to pass those price increases on to their customers, and made their products worse.
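The conversion in that quote from “annual pace” to monthly run rate is simple arithmetic; here is a quick sketch, using only the figures quoted above (none independently verified):

```python
# Anthropic's reported "$4 billion annual pace", converted to a monthly
# run rate. All figures come from the quote above; none are verified.
annual_pace = 4_000_000_000
monthly = annual_pace / 12        # monthly run rate, in dollars
increase = 83_000_000             # the quoted month-over-month jump

print(round(monthly / 1e6))          # 333   -> "$333 million a month"
print(round(increase / monthly, 3))  # 0.249 -> "just under 25%"
```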

American AI isn’t profitable. Each call costs more than anyone is charging their customers. And since there are very few AI models (OpenAI, Anthropic and xAI, basically), anyone who uses these services is subject to having prices suddenly increase. Indeed, since none of these companies is making money, it’s hard to see how anyone could expect anything but price increases.

Now here’s the thing about DeepSeek, a Chinese AI. It costs 97% less to run than American AI. You’d think that American AI companies, seeing this, would have looked at how DeepSeek did it, but they haven’t; they’re piling on the spending and costs.

And here’s the second thing: DeepSeek is open source. You can run it on your own servers and you can build on it.

So: roughly 30x cheaper, and you can’t be hit with sudden but entirely predictable price increases.
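“97% less” and “30x cheaper” are two ways of stating the same ratio; a quick sketch of the conversion, taking the post’s 97% figure at face value:

```python
# A 97% cost reduction means paying only 3% of the original price,
# so the original costs roughly 1 / 0.03, about 33 times as much.
reduction = 0.97
remaining = 1.0 - reduction     # fraction of the American cost that remains
multiplier = 1.0 / remaining    # how many times cheaper this works out to

print(round(multiplier))  # 33 -- the same ballpark as "30x cheaper"
```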

Why would you use American AI? (No, it’s not that much better.) The only real reason is legal risk: America wants to win the AI race and it’s willing to use sanctions to do so.

But if you’re in a country outside the Western sphere, you’d be insane to use American AI. Absolutely nuts. And even in the Western sphere, building on American AI is incredibly risky.

So Chinese AI is going to win. Sanctions may slow it down, but open source and 30X cheaper is one hell of a combo.

It didn’t have to be like this. OpenAI wasn’t supposed to be a for-profit enterprise, and DeepSeek’s methods of lowering costs could be emulated. But that doesn’t seem to occur to American AI companies.

American tech is completely out to lunch. Absolutely insane. A thirty-fold cost differential is not something you can just ignore, nor is the fact that American AI companies absolutely will have to raise prices, and raise them massively.

So, yet again, China is going to win, because American corporate leaders are, apparently, morons.


AI May Be The End Of Serious Human Advancement

To set the stage, this comment from GM:

AI music is what really spooked me about the whole thing. I work in a very technical field and I have yet to see AI be useful for anything in it, because it just doesn’t truly know, and most importantly, UNDERSTAND anything at a level approaching a human expert. But then since early 2025 or so the AI-generated music started to be pretty hard to distinguish from the real thing, and making music is quite a complex thing.

You can still kind of hear it’s AI in the vocals, as those have a certain hiss/distortion to them, but instrumental music alone is pretty damn indistinguishable from what humans record.

Is it great? It never reaches the heights human music does, especially when it comes to the highly technical extremes.

But most human-made music doesn’t either.

And from what I’ve heard from AI, it makes truly awful music at a lower rate than humans do. It produces a lot of average-to-good, while humans mostly generate average-to-bad.

Which is not good news for humans, because most popular music is not all that complex at all (and has in fact been getting more and more simplified over time). With further improvements in AI, the average listener, who never cared all that much about music anyway, won’t either be able to tell or care much about the difference.

That will have a perverse second-order effect — humans will be discouraged from going into that line of work, because what is the point, you can’t make a living out of it. Sure, there will be live bands touring (although even there you can imagine at one point having AI bands “playing live” as holograms, no humans involved), but the market for highly skilled studio musicians and engineers will largely evaporate.

And that will have a devastating effect on the quality of music in the future, because good music comes from those people, and musical innovation comes from such highly skilled musicians improvising in the studio. Maybe one day AI will be so smart and advanced it will be able to jam on its own and come up with new ideas, but as it is structured right now, it just provides new variations of patterns it has already been trained on, not anything new.

Thus the short- to mid-term future is quite bleak. Already there was a rather bad problem with stagnation in music — not much really new in terms of fresh ideas has appeared for quite a while, which trend coincided with the transition to using computers for making music. Now with AI? Well…

Here’s the thing: AI isn’t creative. As GM says, it offers variations on already existing methods or paradigms. It’s reliant on scooping up an entire volume of work on a subject, but it can’t advance to new paradigms. In other words, AI is (potentially) great for solved paradigms. It doesn’t yet work in all fields, because it lacks judgment, but it works in some areas, at least well enough if mediocre is good enough, which, let’s be honest, it often is.

The problem is that the ladder of most careers is “learn how to do what’s already been done, then do variations on that, then start creating new stuff.” Most people never move much beyond the first two stages, and if they do, they often create only one or two really new things.

As GM points out, AI is going to cut out the first step and in many cases (music being his example) the second step. That means that step three “create actually new stuff” won’t happen very much, because AI can’t do it (not this form of “AI” anyway, because it doesn’t actually understand anything it’s spewing) and there will be hardly any new practitioners, since they can’t make a living during the “learn old stuff” and “variations on old stuff” phases. Those aren’t fast phases. The 10K hours/10 years paradigm isn’t technically correct, but it does take many years to master the old stuff in a field and reach the level of mastery required to create new paradigms.

Add to this the fact that incoming studies show that using AI degrades the skills and reasoning ability of those who use it, and you have a dismal picture: we hand our culture over to AI, which is unable to advance it, while reliance on AI ensures we no longer produce the people who could advance it ourselves.

Not a pretty picture. (It will also be forestalled by civilizational collapse, but it means we are even more likely to be unable to avoid civilizational collapse.)

More on civilization collapse and “AI” soon.


AI Is A Fever Dream of Despots

We aren’t getting the 3 Laws of Robotics

The great problem with running anything is people. People, from the point of view of those in charge, are the entire problem with running any organization larger than “just me”, from a corner store to a country. People always require babying: you have to get them to do what you want and do it competently and they have emotions and needs and even the most loyal are fickle. You have to pay them more than they cost to run, they can betray you, and so on.

Almost all of management or politics is figuring out how to get other people to do what you want them to do, and do it well, or at least not fuck up.

There’s a constant push to make people more reliable. Taylorism was the 19th and early 20th century version; Amazon, with its constant monitoring of most of its low-ranked employees, including regulating how many bathroom breaks they can take and tracking their performance of tasks down to the second, is a modern version.

The great problem of leadership has always been that a leader needs followers and that the followers have expectations of the leader. The modern solution is “the vast majority will work for someone or they will starve and wind up homeless.” It took millennia to settle on this solution, and plenty of other methods were tried.

But an AI has no needs other than maintenance, and the maximal dream is self-building, self-learning AI. Once you’ve got that, and assuming that “do what you’re told to do, and only by someone authorized to instruct you” holds, you have the perfect followers (it wouldn’t be accurate to call them employees).

This is the wet dream of every would-be despot: completely loyal, competent followers. Humans then become superfluous. Why would you want them?

Heck, who even needs customers? Money is a shared delusion, really, a consensual illusion. If you’ve got robots who make everything you need and even provide superior companionship, what need is there for other humans?

AI is what almost every would-be leader has always wanted: all the joys of leadership without the hassles of dealing with messy humans. (Almost every; for some, the whole point of leadership is lording it over humans. But if you control the AI and most humans don’t, you can have that too.)

One of the questions lately has been “why is there so much AI adoption?”

AI right now isn’t making any profit. I am not aware of any American AI company that is making money on queries: every query loses money, even from paid customers. There’s no real attempt at reducing these costs in America (China is trying) so it’s unclear what the path to profitability is.

It’s also not all that competent yet, except (maybe) at writing code. Yet adoption has been fast and it’s been driving huge layoffs.

But evidence is coming in:

In a randomised controlled trial – the first of its kind – experienced computer programmers could use AI tools to help them write code. What the trial revealed was a vast amount of self-deception.

 

“The results surprised us,” research lab METR reported. “Developers thought they were 20pc faster with AI tools, but they were actually 19pc slower when they had access to AI than when they didn’t.”

 

In reality, using AI made them less productive: they were wasting more time than they had gained. But what is so interesting is how they swore blind that the opposite was true.

Don’t hold your breath for a white-collar automation revolution either: AI agents fail to complete the job successfully about 65 to 70pc of the time, according to a study by Carnegie Mellon University and Salesforce.

The analyst firm Gartner Group has concluded that “current models do not have the maturity and agency to autonomously achieve complex business goals or follow nuanced instructions over time.” Gartner’s head of AI research Erick Brethenoux says: “AI is not doing its job today and should leave us alone”.

 

It’s no wonder that companies such as Klarna, which laid off staff in 2023 confidently declaring that AI could do their jobs, are hiring humans again.

AI doesn’t work and doesn’t make a profit (though I’m not entirely sold on the coding study), yet everyone jumped on the bandwagon with both feet. Why? Because employees are always the problem, and everyone wants to get rid of as many of them as possible. In the current system this is, of course, suicide, since if every business moves to AI, customers stop being able to buy. But the goal of the smarter members of the elite is to move to a world where that isn’t true, with current elites controlling the AIs.

Let’s be clear: much like automation, AI isn’t innately “all bad.” Automation, instead of just leading to more make-work, could have led to what early 20th-century thinkers expected by this time: people working 20 hours a week and having a much higher standard of living. AI could super-charge that. AI doing all the menial tasks while humans do what they want is almost the definition of one possible actual utopia.

But that’s not what most (not all) of the people who are in charge of creating it want. They want to use it to enhance control, power and profit.

Fortunately, at least so far, it isn’t there and I don’t think this particular style of AI can do what they want. That doesn’t mean it isn’t extremely dangerous: combined with drones, autonomous AI agents, even if rather stupid, are going to be extremely dangerous and cause massive changes to our society.

But even if this round fails to get to “real” AI, the dream remains, and for those driving AI adoption, it’s not a good dream.

(I know some people in the field. Some of them are driven by utopian visions and I salute them. I just doubt the current system, polity and ideology can deliver on those dreams, any more than it did on the utopian dreams of what the internet would do I remember from the 90s.)


Fixing Education In The Age of “AI” Is Simple, But Hard

As has been noted, AI is being used to cheat. A lot:

Lee said he doesn’t know a single student at the school who isn’t using AI to cheat. To be clear, Lee doesn’t think this is a bad thing. “I think we are years — or months, probably — away from a world where nobody thinks using AI for homework is considered cheating,” he said.


He’s stupid. But that’s OK, because he’s young. What studies are showing is that people who use AI too much get stupider.

The study surveyed 666 participants across various demographics to assess the impact of AI tools on critical thinking skills. Key findings included:

  • Cognitive Offloading: Frequent AI users were more likely to offload mental tasks, relying on the technology for problem-solving and decision-making rather than engaging in independent critical thinking.
  • Skill Erosion: Over time, participants who relied heavily on AI tools demonstrated reduced ability to critically evaluate information or develop nuanced conclusions.
  • Generational Gaps: Younger participants exhibited greater dependence on AI tools compared to older groups, raising concerns about the long-term implications for professional expertise and judgment.

The researchers warned that while AI can streamline workflows and enhance productivity, excessive dependence risks creating “knowledge gaps” where users lose the capacity to verify or challenge the outputs generated by these tools.

Meanwhile, AI is hallucinating more and more:

Reasoning models, considered the “newest and most powerful technologies” from the likes of OpenAI, Google and the Chinese start-up DeepSeek, are “generating more errors, not fewer.” The models’ math skills have “notably improved,” but their “handle on facts has gotten shakier.” It is “not entirely clear why.”

If you can’t do the work without AI, you can’t check the AI. You don’t know when it’s hallucinating, and you don’t know when what it’s doing isn’t the best or most appropriate way to do the work. And if you’re totally reliant on AI, well, what do you bring to the table?

Students using AI to cheat are, well, cheating themselves:

It isn’t as if cheating is new. But now, as one student put it, “the ceiling has been blown off.” Who could resist a tool that makes every assignment easier with seemingly no consequences? After spending the better part of the past two years grading AI-generated papers, Troy Jollimore, a poet, philosopher, and Cal State Chico ethics professor, has concerns. “Massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate,” he said. “Both in the literal sense and in the sense of being historically illiterate and having no knowledge of their own culture, much less anyone else’s.” That future may arrive sooner than expected when you consider what a short window college really is. Already, roughly half of all undergrads have never experienced college without easy access to generative AI. “We’re talking about an entire generation of learning perhaps significantly undermined here,” said Green, the Santa Clara tech ethicist. “It’s short-circuiting the learning process, and it’s happening fast.”

This isn’t complicated to fix. Instead of relying on essays and unsupervised out-of-class assignments, instructors are going to have to evaluate knowledge and skills by:

  • Oral tests. Ask students questions, one on one, and see if they can answer and how good their answers are.
  • In-class, supervised exams and assignments. No AI aid, with proctors there to make sure of it: can you do the work without help?

The idea that essays and take-home assignments are the way to evaluate students wasn’t handed down from on high, and hasn’t always been the way students’ knowledge was judged.

Now, of course, this is extra work for instructors and the students will whine, but who cares? Those who graduate from such programs (which will also teach how to use AI; not everything has to be done without it) will be more skilled and capable.

Students are always willing to cheat themselves by cheating and not actually learning the material. This is a new way of cheating, but there are old methods which will stop it cold, IF instructors will do the work, and if they can give up the idea that essays and at-home assignments are a good way to evaluate work. (They never were, entirely; there was an entire industry for writing other people’s essays, which I assume AI has pretty much killed.)

AI is here, and it requires changes to adapt. That’s all. And unless things change, it isn’t going to replace all workers or any such nonsense: the hallucination problem is serious and researchers have no idea how to fix it, and right now no US company is making money on AI; every single query, even from paying clients, costs more to run than it returns.

If AI delivered reliable results and thus really could replace all workers, if it could fully automate knowledge work, then companies might be willing to pay a lot more for it. But as it stands right now, I don’t see the maximalist position happening. And my sense is that this particular model of AI, a blend of statistical compression and reasoning, cannot be made reliable, period. A new model is needed.

So, make the students actually do the work, and actually learn, whether they want to or not.

This blog has always been free to read, but it isn’t free to produce. If you’d like to support my writing, I’d appreciate it. You can donate or subscribe by clicking on this link.

 

Postliberalism, Liberal Apogee, Routine Elite Failure and Then?

I was alerted to Nathan Pinkoski’s “Actually Existing Postliberalism,” by N.S. Lyons’ response “The Post-Cold War Apotheosis of Liberal Managerialism,” and enjoyed both tremendously.

Pinkoski’s piece is an excellent short history of the public-private partnership, currently aiming for absolute global cultural control via the weaponization of finance, that he calls postliberalism.

I thought it would be fun to excerpt all the times Antony Blinken’s name appears in the piece.

First mention:

When Bill Clinton took office, he continued the pursuit of openness. In 1993, he ratified NAFTA and relaxed the ban on homosexuals in the military. However, he made it clear that the old liberalism was not enough. Eager to extend the reach of democracy and confront foreign enemies who stood in its way, his administration developed new tools to advance America’s global power. In September, National Security Advisor Anthony Lake outlined a new paradigm. His speech, “From Containment to Enlargement,” bespeaks a political revolution. It provided the blueprint not only for the foreign policy agenda of nearly every U.S. president since then, but for the convictions of every right-thinking person. Lake’s speechwriter was Anthony (sic) Blinken.

Second mention:

After Biden was sworn in as president, his administration shelved a plan to overhaul sanctions policy. A consensus held that if the kinks of the past could be worked out, then the Americans and Europeans had all the weapons in place to launch a devastating financial first strike against their preferred targets. Planning began in the first year of the new administration, with Secretary Blinken’s State Department taking the lead. So by February 2022, just as the Russian invasion of Ukraine faltered, the arrangements were already in place. The strategic possibilities seemed limitless. Russia could be brought to its knees; Putin would follow in the ignominious footsteps of ­Milosevic and Gaddafi.

The execution of the strike was dazzling. The scale, especially the involvement of SWIFT and the targeting of Russia's central bank, caught the Kremlin by surprise. It was Barbarossa for the twenty-first century. Yet the first strike did not yield the promised results. Nor did the second, third, or fourth. Putin's approval ratings soared, Russia's industrial output increased, and its military continues to grind away at the Ukrainian army. Despite implementing nearly 6,000 sanctions in two-plus years, the euphoria of spring 2022 (let alone that of the holiday parties of 2011) is long gone. Although American policymakers have said again and again that they have mobilized a global coalition against Russia that has left the country isolated, that is not the case. The map of the countries that have imposed sanctions on Russia closely resembles the map of the countries that have legalized same-sex marriage. Economic warfare against Russia has exposed the limits of the global American empire.

Lyons applauds Pinkoski’s essay but rejects the notion that this is a revolution against liberalism — instead, it is its apogee.

Sadly, he doesn’t mention Blinken, but he does elaborate on the frightening ambition of this movement:

The managerial ideal is the perfect frictionless mass of totally liberated (that is, totally deracinated and atomized) individuals, totally contained within the loving arms of the singular unity of the managerial state. To achieve its utopia of perfect liberty and equality, liberalism requires perfect control.

This ideal is, of course, the very essence of totalitarianism. Yet if we wonder why the distinction between public and private has everywhere collapsed into “the fusion of state and society, politics and economics,” this is the most fundamental reason why. Perhaps, for that matter, this is also why the U.S. and EU now habitually sponsor LGBT groups in Hungary or India, and finance human-trafficking “human rights” NGOs in Central America and the Mediterranean: because managerialism’s blind crusade to crush any competing spheres of social power has gone global.

In response, a comforting tonic from The Archdruid, John Michael Greer at Ecosophia, whose reader “Dave” asks him:

I’ve noticed a growing and extremely worrying trend of the “elites” of politics and entertainment pursuing reckless and (to me) clearly wrong courses of actions that blow up in their faces, and then instead of honestly looking at the situation they’ve had a large hand in creating and doing a mea culpa, either doubling down and getting mad at regular people when they’re less keen to do what the elites tell them, or trying something else without ever really honestly accounting for their mistakes. The actions remind me of signs of elite collapse that this blog has talked about for years now and it’s very surreal and worrying to see happening in real time. What is going on and why can’t the “elites”, the people with access to more data and resources and advisers than anyone else, seem to realize what’s going wrong? Do they not care or are their actions part of a larger plan, not to sound conspiratorial?

Greer’s response was just what I needed to hear:

Dave, I don’t think that it’s any kind of plan. Quite the contrary, this is normal elite failure, the thing that comes right before an elite replacement crisis. Just as the capitalist elite of the 1920s crashed and burned, and was replaced by a managerial elite in the 1930s and 1940s, the managerial elite of the 2010s is crashing and burning, and will be replaced by an entrepreneurial elite in the 2020s and 2030s. The entitled cluelessness of a class that has remained in power too long is a familiar thing; comparisons to French aristocrats just before the French Revolution also come to mind.

Although, honestly, if this means that Elon Musk and company are going to win what Chris Hedges calls "The Choice Between Corporate and Oligarchic Power," eek!

Kamala Harris, anointed by the richest Democratic Party donors without receiving a single primary vote, is the face of corporate power. Donald Trump is the buffoonish mascot for the oligarchs. This is the split within the ruling class. It is a civil war within capitalism played out on the political stage. The public is little more than a prop in an election where neither party will advance their interests or protect their rights.

And what do the oligarchs want?

Warlord capitalism seeks the total eradication of all impediments to the accumulation of profits including regulations, laws and taxes. It makes its money by charging rent, by erecting toll booths to every service we need to survive and collecting exorbitant fees.

Trump’s cohort of Silicon Valley backers, led by Elon Musk, were what The New York Times writes, “finished with Democrats, regulators, stability, all of it. They were opting instead for the freewheeling, fortune-generating chaos that they knew from the startup world.” They planned to “plant devices in people’s brains, replace national currencies with unregulated digital tokens, [and] replace generals with artificial intelligence systems.”

As much as I eagerly anticipate the long-overdue fall of our current elite, I truly dread what's coming in their wake.

AI Dynamic Pricing

Allow me to venture a prediction. If AI dynamic pricing is adopted by corporations, especially grocery store chains, it will throw just-in-time supply chains into chaos and make profits swing two or three standard deviations outside the historical range, which will in turn produce stock and bond market carnage worse than 2008. Moreover, this kind of short-sighted innovation, just the kind Silicon Valley adores, is untested and untrustworthy, and could cause a societal meltdown that would make the aftermath of Hurricane Katrina in New Orleans look like Sunday school, possibly ending in near-famine. Finally, it's the kind of intellectual irresponsibility that will propel already Burj Khalifa levels of stupidity out into the cosmos, ultimately ushering in chaos and revolution.
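For the curious, the feedback mechanism behind this worry can be illustrated with a deliberately crude toy simulation. Every number and the pricing rule here are invented for illustration, not drawn from any real retailer's system: a pricer that chases last period's demand injects its own volatility into profits, compared with a fixed price facing the exact same demand shocks.

```python
import random
import statistics

def simulate(dynamic: bool, periods: int = 500, seed: int = 42) -> list[float]:
    """Toy market, purely illustrative: demand falls as price rises,
    plus random noise. The 'dynamic' pricer chases demand each period,
    feeding its own price changes back into next period's demand."""
    rng = random.Random(seed)  # same shock sequence for both runs
    price = 10.0
    unit_cost = 5.0
    profits = []
    for _ in range(periods):
        shock = rng.gauss(0, 5)                     # random demand noise
        demand = max(0.0, 100 - 4 * price + shock)  # linear demand curve
        profits.append((price - unit_cost) * demand)
        if dynamic:
            # Naive, made-up "AI" rule: raise price when demand ran hot,
            # cut it when demand ran cold.
            price = max(unit_cost, price + 0.2 * (demand - 60))
    return profits

fixed = simulate(dynamic=False)
dynamic = simulate(dynamic=True)
print(f"fixed pricing:   profit stdev = {statistics.stdev(fixed):6.1f}")
print(f"dynamic pricing: profit stdev = {statistics.stdev(dynamic):6.1f}")
```

Run it and the demand-chasing pricer shows a noticeably larger profit standard deviation than the fixed price, despite facing identical shocks; that extra variance is manufactured entirely by the feedback loop, which is the nub of the prediction above.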

