The horizon is not so far as we can see, but as far as we can imagine

Category: Science and Technology

Is China Going To Win The Humanoid Robot Race & End Capitalism As We Know It?

Elevated from Comments. Piece by KT Chong

China is now entering the next phase of its economic-growth engine — humanoid robots.

And just like with EVs, the shift is happening fast, quietly, and with the same pattern: Chinese companies industrialize before Western analysts even realize it’s begun.

UBTech, Unitree, XPeng — they’ve all started mass-producing and delivering humanoid robots. This is not “prototype hype” or “lab demo” stuff anymore. It’s real machines getting shipped to real factories, hospitals, and even homes. China’s humanoid sector is going to be the next multi-hundred-billion-dollar growth curve, and the West is, once again, completely oblivious.

Frankly, IMO it’s already too late for the West to catch up.

Anyway, my point here today is… the Unitree G1 Ecosystem.

While reading deeper, I found something much more important: a lot of these new humanoid startups aren’t building from scratch. Instead, they’re standing on the Unitree G1 frame and layering their own proprietary AI on top. That means Unitree has quietly become the default hardware platform for China’s humanoid boom — like the Android of robot bodies.

A few examples:

1. A-Bots Robotics (Shenzhen, 2024)

• Focus: precision assembly, modular SDK

• AI layer: Baidu Ernie-ViLM for object manipulation

• Notes: 150+ units in Foxconn trials; ~$22k package; tuned for fragile electronics

2. HPDrones Tech (Guangzhou, 2023)

• Focus: warehouse logistics + drone hand-off automation

• AI layer: proprietary SLAM + multi-floor routing

• Notes: partnered with Unitree; 500-unit rollout for e-commerce warehouses in Q1 2026

3. LeRobot Labs (Beijing, 2024)

• Focus: open-source robotics + reinforcement learning

• AI layer: embodied datasets, tool-use improvisation

• Notes: hacked 20+ G1s for universities; GitHub repo exploded; expanding to eldercare

4. Weston Intelligence (Hangzhou, 2023)

• Focus: healthcare — vitals scanning, bedside conversations

• AI layer: Tencent Hunyuan conversational model

• Notes: deployed in Shanghai hospitals; sub-$20k price; measurable patient-compliance benefits

5. DexAI Dynamics (Shenzhen, 2024)

• Focus: dexterity — folding fabric, micro-adjustments, teleop self-supervision

• Notes: $80M raised; 100 units deployed in garment factories; arguably the best hands in China now

And then there’s MindOn — the one that caught my eye earlier — using the G1 frame to build a full butler/housekeeping robot (“MindOne”). One of their engineers even said they eventually want their own frame, but that’s the point: everyone is starting on Unitree first.

Unitree has locked down the humanoid robot ecosystem

All these startups — even if they eventually design their own skeletons — are still tying their early models to:

• Unitree’s frames

• Unitree’s actuator supply chain

• Unitree’s low-cost motor ecosystem

• Unitree’s software layer and APIs

Once you build your first few generations on someone else’s chassis + firmware, you’re effectively locked into their ecosystem. Switching costs explode. You’d have to rewrite half your AI stack.

So Unitree has already achieved what Western robotics companies wish they could do:

Become the default hardware substrate for an entire national robotics industry.

This is exactly how China overtook the West in EVs — standardized hardware, cheap mass manufacturing, and dozens of startups building on top of the same base.

Unitree is still a private company.

Given everything above, the most obvious question becomes: When does Unitree IPO?

On 15–16 November 2025 (literally this weekend), Unitree completed its pre-IPO regulatory tutoring with CITIC Securities — an unusually fast four-month process that normally takes 6–12 months.

The company publicly stated in September that it expects to submit the formal prospectus and listing application to the Shanghai STAR Market between October and December 2025.

Market sources still quote a targeted valuation of up to US$7 billion (≈50 billion RMB).

Once the prospectus is accepted (usually 2–4 rounds of CSRC questions), the actual listing can happen remarkably quickly in a hot sector — sometimes inside 3–6 months. A Q1/Q2 2026 listing is the base case, but a very late-2025 listing is still possible if the regulator fast-tracks it the way they have the tutoring.

What About America?

Meanwhile… America’s Great White Hope Elon Musk is already behind.

Elon Musk promised that the U.S. would lead the humanoid robot race with Tesla Optimus — but the timelines have slipped, and the window has basically closed. By the time Musk’s robot is actually ready for real-world deployment — 2 years from now? 3? — China’s robotics companies will already be deep into mass production, with tens of thousands of units deployed across factories, warehouses, homes, hospitals, and service industries.

And let’s be real — we all already know this:

Tesla will NOT be cost-competitive. Not even close.

China has already hit the sub-$20k price point for serious humanoids. Several G1-derived platforms will likely break below $15k. Meanwhile, Tesla Optimus — if it gets out of prototype limbo — will land somewhere between $20k and $40k+, before customization, localization, or integration costs. It’s the exact same pattern we saw with EVs, solar panels, drones, lithium batteries, and telecom gear — the U.S. builds one expensive proof-of-concept; China builds ten factories and ships globally.

So yes, Tesla’s robot may survive inside the U.S., but only through:

• tariffs,

• import bans,

• national-security excuses,

and whatever industrial-policy tool Washington can wield.

It won’t survive on merit. It will survive on protectionism.

But step outside the U.S.?

Why would any ASEAN, Middle Eastern, African, or Latin American country buy a Tesla robot when Unitree, UBTech, XPeng, and others are offering machines that are:

• cheaper,

• available now, not in 2027,

• and likely generations more advanced by 2027.

You think Indonesia, Malaysia, Brazil, Mexico, Turkey, or Saudi Arabia is going to pay double the price for a worse robot just to keep Washington happy? You think they’re going to turn down a $12k Unitree or $16k UBTech because Trump tries to bully them into paying for a $35k American robot instead?

The U.S. will absolutely try to pressure, coerce, or outright threaten developing countries into “buying American” — the same way it pressures them on telecom, semiconductors, energy infrastructure, ports, and industrial policy. But this time I don’t think most countries will obey.

They have options now.

By the time the U.S. finally ships its first commercially deployable humanoids in 2–3 years, the rest of the world will already be locked into the Chinese robotic ecosystem — Unitree frames, Chinese actuators, Chinese SDKs, Chinese AI integration, Chinese supply chains.

The EU, Australia, Japan, South Korea, and Taiwan — effectively U.S. satellites — may follow Washington’s orders and switch to American robots. Maybe. If their economies in two years can still afford it.

Everyone else?

Forget it.

Forcing U.S. factories and businesses to buy “American-only” humanoid robots — which will be more expensive and less advanced — will cripple U.S. competitiveness across the board.

If American companies are stuck paying $30k–$40k per unit for less capable Tesla or U.S.-made robots, while factories in China, Malaysia, Indonesia, Brazil, Vietnam, Mexico, Turkey, and everywhere across the Global South are deploying $12k–$18k Chinese robots at scale, the cost gap between U.S. and foreign manufacturing will explode. And it won’t stop at robotics — it will cascade downstream into every single sector that depends on automation:

• logistics
• warehousing
• construction
• agriculture
• textiles
• electronics assembly
• packaging
• even retail, service, and hospitality

If U.S. firms are locked into a high-cost, low-capability robotic ecosystem while the rest of the world uses cheaper, better, faster machines, then every American industry that relies on automation gets structurally handicapped. That’s not just a disadvantage — that’s YUGE and permanent.

So Trump’s protectionism will actually accelerate the decline of U.S. manufacturing competitiveness. Because the battlefield is no longer labor cost — the battlefield is automation cost.

And China will win that fight by orders of magnitude.

This is also why I doubt even America’s closest aligned countries will follow U.S. orders when Washington eventually demands they drop Chinese robots and buy American ones. Unless they’ve developed a death wish for their own industries, they simply can’t afford to sabotage themselves like that — especially when their economies will likely be in even worse shape two years from now.

Except Europe. Europe will probably obey, because their heads are shoved so far up America’s arse they can’t even think straight — and then there’s that incessant, obnoxious demand of theirs: “You must stop be friend with Russia first or we won’t play with you!”

In my opinion China will eventually move toward some form of universal income or redistribution. Once robots replace most human labor, the state will simply “tax” robotic productivity — in whatever form it chooses — and channel that output back to the population. China can do that because the government actually has the authority, the ideology, and the political structure to redistribute.

After all, that’s the logical endgame of communism, isn’t it? A fully automated productive base supporting human welfare.

America? No such luck.

In the U.S., the elites — the top 5%, or really the top 1% — will own the robots. They’ll own the factories, the logistics chains, the land, the means of production, and the automated labor force. Everyone else below them will get… nothing. No jobs, no prospects, no future, nada. Just a growing underclass structurally locked out of the new automated economy, where human labor is obsolete and redundant.

And unlike China, the U.S. government can’t — and won’t — redistribute. It won’t tax robots because it won’t tax the ultra-rich. It won’t implement a universal income. It won’t structurally rebalance anything. The millions displaced by automation will simply be left to rot — not because the technology is bad, but because the political system is incapable of adapting to it.

And if there’s one thing I’ve learned comparing Americans and Chinese: Americans are astonishingly ideologically rigid, stubbornly wedded to outdated principles even when reality punishes them. The Chinese, by contrast, are pragmatic — willing to bend, adapt, and change. That adaptability will matter a lot when robots replace human labor and make capitalism, as we know it, obsolete.

That’s why America is panicking. They know they can’t adapt.


Ian Comments: again, China is ahead in most technologies and they have an unparalleled ability to scale. Once they scale, no one else can compete. You either find a place where you’re ahead and concentrate on staying ahead, or you find a niche. It used to be that China didn’t feel the need to be ahead in everything, but Trump, in his first term, with his sanctions, changed that. The Chinese realized they had to own the full stack of everything.

One side effect of this is that Musk isn’t going to get his one-trillion-dollar payday. It’s based on him hitting targets, including in humanoid robots, which he won’t be able to do, because Tesla’s too far behind and lacks the ability to scale.

More on the transition away from labor-distribution capitalism soon.

And great piece by KT. Thanks for letting me post it.

A Mercifully Brief History of Mathematics

I’m a trained historian. At least I consider myself one; with a Master’s in History and in International Relations, I think I qualify. But today I must confess to a dilettantish interest in the history of mathematics. Now, please understand that I am no mathematician. I struggled through college algebra. I will, however, add that when I completed college algebra my analytical faculties grew so profoundly—at least to me in hindsight—that I made the Dean’s List every semester thereafter. So, I believe there is something quite important to be said about learning how to solve for ’n’ that we should impart to our children. In the beginning the abstract nature of algebra confounded me, but once I was able to conceptualize it, I began mastering the equations and, as aforementioned, my intellectual faculties grew rapidly and intensely. Soon, my intolerance for fucktose in a history text—or any text for that matter—became keen, acute and annoying as hell to many of my fellow junior and senior history seminar classmates. But I digress. This is about math. Let me add before the next paragraph begins that I also never took calculus. But we’ll get to calculus soon.

First, my fascination began with Euclid and how he systematized and synthesized Egyptian and Babylonian ideas into a coherent structure of elements that led to modern plane geometry. The dude took the wisdom of the pyramid builders and the ziggurat builders and discovered a way of looking at the world to build in new ways. That takes a hell of a mind, one I can appreciate, even at this far a distance in time. As I studied Euclid I learned that Babylonia used a base-sixty numerical system, while the Egyptians used a base-ten system. The Egyptians were the first to utilize fractions, around 1000 BC. Then, in the 5th century BC, the Indians, in an attempt to square the circle, calculated the square root of two correctly to five decimal places. Then around 300 BC the Indians used Brahmi numerals to further refine the true ancestor of our base-ten system. At the same time the Babylonians invented the abacus.
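Base sixty, incidentally, is still with us: sixty seconds to a minute, sixty minutes to an hour. A tiny sketch in modern Python (purely illustrative, obviously nothing from the period) shows the same number rendered in both systems:

```python
def to_base(n: int, base: int) -> list[int]:
    """Return the digits of n in the given base, most significant first."""
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        digits.append(n % base)  # peel off the least significant digit
        n //= base
    return digits[::-1]

# 3661 seconds is 1 hour, 1 minute, 1 second: base sixty in action
print(to_base(3661, 60))  # [1, 1, 1]
print(to_base(3661, 10))  # [3, 6, 6, 1]
```

Three base-sixty digits cover what takes four base-ten digits; the Babylonians liked sixty partly because it divides so cleanly (by 2, 3, 4, 5, 6, 10, 12, 15, 20 and 30).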

The poor Romans didn’t do diddly for mathematics. Imagine doing complex calculations with Roman numerals? Screw that. But they sure used them to build roads and survey, among other things. So, kudos to them for applied mathematics. A lot of stuff happened between the Romans and the next development. Stuff which I am skipping because I’m trying to get to a simple point without using two thousand words to do so.

Something truly remarkable happened in India in 628 AD. Brahmagupta wrote a book that clearly explained and delineated the role of zero in a proto-Hindu-Arabic script. This was positively revolutionary. He is the clear discoverer of the modern place-value system of numbers, as well. Well, for natural numbers, that is.

And now stuff really begins to accelerate.

In 810 the House of Wisdom is built in Baghdad for the express purpose of translating Greek and Sanskrit mathematical and philosophical texts. Ten years later, in 820, a Persian from Khwarazm—the delta of the Oxus River into the former Aral Sea—discovered a way to solve linear and quadratic equations. His name was al-Khwarizmi and his book was called Al Jabr—which was Europeanized into algebra. His book, once it reached Europe three and a half centuries later, introduced the Hindu-Arabic numeral system, which was adopted wholesale by the nascent scientific community emerging in the earliest European universities. Universities also have a Muslim Golden Age pedigree, coming from Nizam al-Mulk, the great Persian vizier to Malik Shah, the Seljuk Sultan of Central Asia. His Nizamiyyas, now known as madrassas, were built all over the Seljuk realm and were the earliest versions of universities, where men came from all over to learn many different topics. Sadly, the madrassas fell into stagnation when al-Ghazali closed the gates of ijtihad (open questioning) in 1091 with his book The Incoherence of the Philosophers. The Muslim Golden Age ended that year.

Now, between the foundation of the earliest European universities and Isaac Newton, a lot of essential groundwork was laid for Ike’s work. I seek not to diminish any of that. But Newton begat not one, not two, not three but four revolutions in science: optics, mathematics, mechanics and gravity. His discovery of infinitesimal calculus is literally the basis for modern rocket science, as he used it to calculate and predict with stunning accuracy the movements of heavenly bodies, something hitherto impossible. Newton is simply the single greatest mind in the history of human science. He stands on the shoulders of some mighty men, but his accomplishments are of the ages.

Now, I come to the point. In this essay I have used a very specific word with each mathematical advance I have discussed. That word is “discovered.” I have purposefully eschewed the use of “invented.” And I have done so for a damn good reason. I am what you call a ‘mathematical Platonist.’ Said theory is defined by Wikipedia as “the form of realism that suggests mathematical entities are abstract, have no spatiotemporal or causal properties, and are eternal and unchanging.” Thus, as the Brits would say, ‘maths’ are discovered. However, the opposite of said theory is mathematical nominalism, which has its merits and is defined as, “the philosophical view that abstract mathematical objects like numbers, sets, and functions do not exist in reality, or at least do not exist as abstract entities independent of concrete things or the mind.” Thus as we Yanks say, they be invented.

So why did I write this essay? Because this discussion on the merits of the two theories is utterly fascinating to me. And if you have ten minutes and a solid high school foundation in mathematics you will most certainly understand and appreciate it. The interview engrossed me from the first question.

One final note: Ms. Jonas, the philosopher of math being interviewed, says that she is 87% certain mathematical Platonism is correct; I’ll put my confidence level at about 59%. Why? Because there are some set-theory ideas I simply cannot wrap my danged head around–I reckon my grey matter isn’t as big or maybe as sophisticated as Ian’s. I licked logic in college with an A+, but this set-theory stuff? Good grief. The paradoxes drive me wonko! (If you get the reference add ten bonus points to your final grade.)

If you’ve read this far, and you’ve read some of my articles and most if not all of Ian’s, then you might wish to Subscribe or donate. Ian has written over 3,500 posts, and the site, and Ian, need the money to keep the shop running. So please, consider it.

Is Consciousness Reality’s Organizing Principle? (Beyond Biocentrism, by Lanza and Berman)

Quantum mechanics is seriously weird. The majority of us have a model of the world based primarily on Newtonian physics. We believe in cause and effect. The universe is a giant machine following laws, and if there wasn’t a single conscious being in it, those laws would still be the same.

But in quantum mechanics particles like photons don’t exist as particles until observed. If a photon is given the choice of two paths, it takes both as a wave, but if measured and observed to see which path it took, it then takes only one.

The key point here is observed. If a measurement is in the past, the photon doesn’t choose which particle path it took until observed. (It may decohere, but it doesn’t choose which way to decohere. Or so runs Lanza and Berman’s argument.)

Schrodinger’s cat is an attempt to scale this up to macro, and to show how absurd it is. “The cat is both alive and dead.” (It doesn’t really work, because the cat is conscious and observes.)

Lanza has written a series of books on Biocentrism, each more extreme than the last. Beyond Biocentrism is the third in the series.

Biocentrism takes the quantum physics at face value and tries to extend the consequences. It argues that nothing really exists except in potentiality (a range of possibilities) until it is observed by something that is conscious. This doesn’t have to mean a human; presumably any conscious being will do the job. Lanza discusses birds and fish and bats and dogs, all of whom observe the world differently than we do, but I’d point out that evidence is coming in that at least some plants (almost certainly trees) are conscious. Perhaps single-celled entities are, and we keep looking for those in places like Mars and the subsurface oceans of moons and so on.

Lanza notes that the conditions for life, especially Earth life, are very specific. From atomic constants to the moon impacting the Earth in just the right way and winding up not orbiting the equator, nor destroying the Earth, the odds against a garden world like ours are astronomical. Even the odds of a universe existing which allowed for life in theory are astronomical.

Biocentrism resolves this by putting consciousness first. Concrete reality is formed by consciousness, so physical laws must conform to what is required for life, since it is biological life which gives rise to consciousness. The odds go from astronomical to “they had to support life, so they did.”

Lanza’s interpretations of the consequences of quantum mechanics or even of quantum mechanics itself aren’t always orthodox. For example, there’s a delayed choice experiment called the quantum eraser, in which finding out something in the future seems to change the past.

While delayed-choice experiments might seem to allow measurements made in the present to alter events that occurred in the past, this conclusion requires assuming a non-standard view of quantum mechanics. If a photon in flight is instead interpreted as being in a so-called “superposition of states”—that is, if it is allowed the potentiality of manifesting as a particle or wave, but during its time in flight is neither—then there is no causation paradox. This notion of superposition reflects the standard interpretation of quantum mechanics.

Lanza interprets this differently: no, the change actually occurs in the past, and there is a causation “paradox,” though in biocentrism it’s not a paradox, since consciousness is primary.

I don’t claim to know who’s right about this. Hopefully an experiment will be devised which resolves the issues. But Lanza brings it up in part to rescue free choice.

As you may be aware, experiments show that by the time we become consciously aware of making a decision, the decision has already been made. Biologists can tell that we’ll do something before we believe we’ve made the decision. Since neural activity is fundamentally quantum, Lanza attempts to rescue free will by suggesting that the decision is indeed made when we believe we made it; it’s just that it changes the past through the act of observation.

Without something like this, we are, in fact, biological machines and free will is an illusion. Blaming or taking credit for anything you have ever done, or anything you are, is ludicrous. You are just a cause and effect machine and your idea that you’re in control of any of it is an illusion. (Why that illusion should exist is an interesting question.)

I don’t consider myself qualified to judge Lanza and Berman’s work on Biocentrism. It might be substantially right and it might not be. But I do think he makes a good case that the science (which he describes at great length, including having appendices with the math) doesn’t allow us to cling to Newtonian or even Einsteinian views of the universe or our place in it. Something weird is going on when consciousness is required to cause wave-function collapse. Indeed, he even includes one experiment where the effect was scaled up to macro, though still a very small macro.

The world is strange. Far stranger than the still-reigning consensus “folk” models suggest, and while biocentrism may not be correct in all its details, it’s worth reading and considering, because it takes quantum mechanics’ weird results seriously and tries to reason from them, rather than around them in an attempt to preserve as much of the older systems as possible.

At the same time, we must always be wary. After all, post-Newton very few people outside of some religions would have argued against a clockwork universe, and it turned out that informed opinion was, well, wrong. (Which doesn’t mean God made the universe in 7 days or any such nonsense.)

Still, this is the cutting edge, and we know at the very least that it puts a few nails in the clockwork universe’s coffin and at least a couple into the relativistic universe. To ignore it, and to pretend that consciousness isn’t much more important than we thought it was, is head-in-the-sand thinking. And Lanza isn’t some quack. His interpretation may be unorthodox, but he understands the science.

I think this, or one of the other Biocentrism books is very worth reading. Even if you wind up not buying the whole package, you’ll be forced to rethink what you “know.”

***


Why China’s Big On Open Source

Yesterday we discussed Chinese vs. American AI. The big difference is that a lot of China’s AI is Open Source. Not just Deepseek, but:

In addition to Baidu, other Chinese tech giants such as Alibaba Group and Tencent have increasingly been providing their AI offerings for free and are making more models open source.

For example, Alibaba Cloud said last month it was open-sourcing its AI models for video generation, while Tencent released five new open-source models earlier this month with the ability to convert text and images into 3D visuals.

Smaller players are also furthering the trend. ManusAI, a Chinese AI firm that recently unveiled an AI agent that claims to outperform OpenAI’s Deep Research, has said it would shift towards open source.

Nor is it just in AI. An emphasis on open source isn’t just a private matter; it’s in the latest five-year plan.

And that makes sense. Open Source has the great advantage that it’s not subject to geopolitical risk. The US can’t cut off countries that use open source. It also has the advantage that private actors can’t squeeze you nearly as much. If you’re using proprietary tech, whoever owns it can raise prices or stop selling to you.

Moreover, non-Western customers are more likely to buy products built on open source, again, because it’s much freer of geopolitical or private squeeze risk.

But probably the most important thing is that Open Source and open standards speed up innovation. Anyone who wants to can build on them without paying exorbitant fees, or without simply being locked out by patent or copyright concerns. If you actually want rapid advancement in tech, and China does (the US, overall, does not), then open source makes sense. The original intention of patents was to get inventors to share, not to lock in long-term profits. Patents were usually granted for relatively short terms.

The great difference between American and Chinese leadership, both private and public, is that the Chinese genuinely do think strategically and long term, and that Chinese leaders care (or, at least, in many more cases act as if they care) about China, not just their own companies or themselves. There is a unifying vision for the country, a true belief in technological advancement, and a belief that technology can be used to help ordinary people. I remember seeing a cartoon on AI where in America it’s used to get rid of artists and writers and in China it’s used to free people up so they can be artists and writers.

Who knows if it’ll work that way, but the “Jetsons” future assumes that tech is meant to do things for us so we can enjoy life more, not so that more and more people can be made poverty stricken, and China has that spirit.

When you believe in technology and science, truly, as for the public good and not just for private profit, well, you wind up leading the rest of the world in 80% of techs.

And soon it will be 90%.

 


Why China Is Going To Win The AI Race

When you look at AI, right now, it has one major use case that people are really willing to pay for: coding. That means Cursor and, to a lesser extent, Replit. Let’s take Cursor as an example: it is built on top of other companies’ AI.

This is a problem, because Cursor doesn’t have a service to sell without making calls to other companies’ AIs, and those companies can raise prices and Cursor has to eat it.

As Zitron notes, this is what actually happened recently:

A couple of weeks ago, I wrote up the dramatic changes that Cursor made to its service in the middle of June on my premium newsletter, and discovered that they timed precisely with Anthropic (and OpenAI to a lesser extent) adding “service tiers” and “priority processing,” which is tech language for “pay us extra if you have a lot of customers or face rate limits or service delays.” These price shifts have also led to companies like Replit having to make significant changes to its pricing model that disfavor users….

  • On or around June 16 2025 — Cursor changes its pricing, adding a new $200-a-month “Ultra” tier that, in its own words, is “made possible by multi-year partnerships with OpenAI, Anthropic, Google and xAI,” which translates to “multi-year commitments to spend, which can be amortized as monthly amounts.”
  • A day later, Cursor dramatically changed its offering to a “usage-based” one where users got “at least” the value of their subscription — $20-a-month provided more than $20 of API calls — in compute, along with arbitrary rate limits and “unlimited” access to Cursor’s own slow model that its users hate.
  • June 18 — Replit announces its “effort-based pricing” increases.
  • July 1 2025 — The Information reports Anthropic has hit “$4 billion annual pace,”  meaning that it is making $333 million a month, or an increase of $83 million a month, or an increase of just under 25% in the space of a month.

In other words, Anthropic, which still isn’t making money even now, increased its prices and Cursor and Replit were forced to pass those price increases on to their customers, and made their products worse.
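The squeeze is easy to see with toy numbers. A minimal sketch, with figures invented purely for illustration (these are not Cursor’s or Anthropic’s actual prices):

```python
# Hypothetical unit economics for an AI "wrapper" company.
# All figures below are invented for illustration only.
subscription = 20.00     # monthly price charged to one user
upstream_cost = 14.00    # upstream API cost per user, before the hike
price_hike = 1.25        # upstream provider raises prices ~25%

margin_before = subscription - upstream_cost              # 6.00 per user
margin_after = subscription - upstream_cost * price_hike  # 2.50 per user

print(f"margin before: ${margin_before:.2f}, after: ${margin_after:.2f}")
```

Once the upstream provider moves, the wrapper’s only options are to raise its own prices, cap usage, or push users onto a cheaper, worse model, which is essentially the menu Cursor worked through.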

American AI isn’t profitable. Each call costs more than anyone is charging their customers. And since there are very few AI models (OpenAI, Anthropic and X, basically), anyone who uses these services is subject to having prices suddenly increase. Indeed, since none of these companies is making money, it’s hard to see how anyone could expect anything but price increases.

Now here’s the thing about Deepseek, a Chinese AI. It costs 97% less to run than American AI. You’d think that American AI companies, seeing this, would have looked at how Deepseek did it, but they haven’t; they’re piling on the spending and costs.
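As a quick sanity check on that figure, a 97% cost reduction works out to roughly a 33x cost multiple, which is where the rough “30x cheaper” shorthand comes from (normalized numbers, not actual per-token prices):

```python
# "97% less" expressed as a cost multiple (normalized, not real prices)
us_cost = 1.00
deepseek_cost = us_cost * (1 - 0.97)  # 97% cheaper leaves 3% of the cost
ratio = us_cost / deepseek_cost
print(f"~{ratio:.0f}x cheaper")       # roughly 33x, i.e. the "30x" figure
```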

And here’s the second thing: Deepseek is open source. You can run it on your own servers and you can build on it.

So: 30x cheaper, and you can’t be hit with sudden but entirely foreseeable price increases.

Why would you use American AI? (No, it’s not that much better.) The only real reason is legal risk: America wants to win the AI race and it’s willing to use sanctions to do so.

But if you’re in a country outside the Western sphere you’d be insane to use American AI. Absolutely nuts. And even within the Western sphere, building off American AI is incredibly risky.

So Chinese AI is going to win. Sanctions may slow it down, but open source and 30X cheaper is one hell of a combo.

It didn’t have to be like this. OpenAI wasn’t supposed to be a for profit enterprise and Deepseek’s methods of lowering costs could be emulated. But that doesn’t seem to occur to American AI companies.

American tech is completely out to lunch. Absolutely insane. A thirtyfold cost differential is not something you can just ignore, nor is the fact that American AI companies absolutely will have to raise prices, and raise them massively.

So, yet again, China is going to win, because American corporate leaders are, apparently, morons.

If you’ve read this far, and you read a lot of this site’s articles, you might wish to Subscribe or donate. The site has over 3,500 posts, and the site, and Ian, take money to run.

AI May Be The End Of Serious Human Advancement

To set the stage, this comment from GM:

AI music is what really spooked me about the whole thing. I work in a very technical field and I have yet to see AI be useful for anything in it, because it just doesn’t truly know, and most importantly, UNDERSTAND anything at a level approaching a human expert. But then since early 2025 or so the AI-generated music started to be pretty hard to distinguish from the real thing, and making music is quite a complex thing.

You can still kind of hear it’s AI in the vocals, as those have a certain hiss/distortion to them, but instrumental music alone is pretty damn indistinguishable from what humans record.

Is it great? It never reaches the heights human music does, especially when it comes to the highly technical extremes.

But most human-made music doesn’t either.

And from what I’ve heard from AI, it makes truly awful music at a lower rate than humans do. It produces a lot of average-to-good, while humans mostly generate average-to-bad.

Which is not good news for humans, because most popular music is not all that complex (and has in fact been getting more and more simplified over time). With further improvements in AI, the average listener, who never cared all that much about music anyway, either won’t be able to tell the difference or won’t care.

That will have a perverse second-order effect — humans will be discouraged from going into that line of work, because what is the point, you can’t make a living out of it. Sure, there will be live bands touring (although even there you can imagine at one point having AI bands “playing live” as holograms, no humans involved), but the market for highly skilled studio musicians and engineers will largely evaporate.

And that will have a devastating effect on the quality of music in the future, because good music comes from those people, and musical innovation comes from such highly skilled musicians improvising in the studio. Maybe one day AI will be so smart and advanced it will be able to jam on its own and come up with new ideas, but as it is structured right now, it just provides new variations of patterns it has already been trained on, not anything new.

Thus the short- to mid-term future is quite bleak. Already there was a rather bad problem with stagnation in music — not much really new in terms of fresh ideas has appeared for quite a while, which trend coincided with the transition to using computers for making music. Now with AI? Well…

Here’s the thing: AI isn’t creative. As GM says it offers variations on already existing methods or paradigms. It’s reliant on scooping up an entire volume of work on subjects, but it can’t advance to new paradigms. In other words, AI is (potentially) great for solved paradigms. It doesn’t, yet, work in all fields because it lacks judgment, but it works in some areas, at least well enough if mediocre is good enough, which, let’s be honest, it often is.

The problem is that the ladder of most careers is “learn how to do what’s already been done, then do variations on that, then start creating new stuff.” Most people never move much beyond the first two stages, and those who do often create only one or two really new things.

As GM points out, AI is going to cut out the first step and in many cases (music being his example) the second step. That means that step three “create actually new stuff” won’t happen very much, because AI can’t do it (not this form of “AI” anyway, because it doesn’t actually understand anything it’s spewing) and there will be hardly any new practitioners, since they can’t make a living during the “learn old stuff” and “variations on old stuff” phases. Those aren’t fast phases. The 10K hours/10 years paradigm isn’t technically correct, but it does take many years to master the old stuff in a field and reach the level of mastery required to create new paradigms.

Add this to the studies coming in showing that using AI degrades the skills and reasoning ability of its users, and you have a dismal picture: we hand our culture over to AI, which is unable to advance it, while reliance on AI ensures we no longer produce the people who could.

Not a pretty picture. (All this may be forestalled by civilization collapse, but it also means we are even more likely to be unable to avoid that collapse.)

More on civilization collapse and “AI” soon.


AI Is A Fever Dream of Despots

We aren’t getting the 3 Laws of Robotics

The great problem with running anything is people. People, from the point of view of those in charge, are the entire problem with running any organization larger than “just me,” from a corner store to a country. People always require babying: you have to get them to do what you want, and do it competently; they have emotions and needs; even the most loyal are fickle. You have to pay them more than they cost to run, they can betray you, and so on.

Almost all of management or politics is figuring out how to get other people to do what you want them to do, and do it well, or at least not fuck up.

There’s a constant push to make people more reliable. Taylorism was the late 19th and early 20th century version; Amazon, with its constant monitoring of most of its low-ranked employees, including regulating how many bathroom breaks they can take and timing their tasks down to the second, is a modern version.

The great problem of leadership has always been that a leader needs followers, and that the followers have expectations of the leader. The modern solution is “the vast majority will work for someone or other, or they will starve and wind up homeless.” It took millennia to settle on this solution, and plenty of other methods were tried.

But an AI has no needs other than maintenance, and the maximal dream is a self-building, self-learning AI. Once you’ve got that, and assuming that “do what you’re told, and only by someone authorized to instruct you” holds, you have the perfect followers (it wouldn’t be accurate to call them employees).

This is the wet dream of every would-be despot: completely loyal, competent followers. Humans then become superfluous. Why would you want them?

Heck, who even needs customers? Money is a shared delusion, really, a consensual illusion. If you’ve got robots who make everything you need and even provide superior companionship, what need have you of other humans?

AI is what almost every would-be leader has always wanted: all the joys of leadership without the hassles of dealing with messy humans. (Almost every; for some, the whole point of leadership is lording it over humans. But if you control the AI and most humans don’t, you can have that too.)

One of the questions lately has been “why is there so much AI adoption?”

AI right now isn’t making any profit. I am not aware of any American AI company that is making money on queries: every query loses money, even from paid customers. There’s no real attempt at reducing these costs in America (China is trying) so it’s unclear what the path to profitability is.

It’s also not all that competent yet, except (maybe) at writing code. Yet adoption has been fast and it’s been driving huge layoffs.

But evidence is coming in:

In a randomised controlled trial – the first of its kind – experienced computer programmers could use AI tools to help them write code. What the trial revealed was a vast amount of self-deception.

“The results surprised us,” research lab METR reported. “Developers thought they were 20pc faster with AI tools, but they were actually 19pc slower when they had access to AI than when they didn’t.”


In reality, using AI made them less productive: they were wasting more time than they had gained. But what is so interesting is how they swore blind that the opposite was true.

Don’t hold your breath for a white-collar automation revolution either: AI agents fail to complete the job successfully about 65 to 70pc of the time, according to a study by Carnegie Mellon University and Salesforce.

The analyst firm Gartner Group has concluded that “current models do not have the maturity and agency to autonomously achieve complex business goals or follow nuanced instructions over time.” Gartner’s head of AI research Erick Brethenoux says: “AI is not doing its job today and should leave us alone”.


It’s no wonder that companies such as Klarna, which laid off staff in 2023 while confidently declaring that AI could do their jobs, are hiring humans again.

AI doesn’t work and doesn’t make a profit (though I’m not entirely sold on the coding study), yet everyone jumped on the bandwagon with both feet. Why? Because employees are always the problem, and everyone wants to get rid of as many of them as possible. In the current system this is, of course, suicide, since if every business moves to AI, customers stop being able to buy; but the goal of the smarter members of the elite is to move to a world where that isn’t true, and where current elites control the AIs.

Let’s be clear: much like automation, AI isn’t innately “all bad.” Automation, instead of just leading to more make-work, could have led to what early 20th century thinkers expected by this time: people working 20 hours a week and enjoying a much higher standard of living. AI could super-charge that. AI doing all the menial tasks while humans do what they want is almost the definition of one possible actual utopia.

But that’s not what most (not all) of the people who are in charge of creating it want. They want to use it to enhance control, power and profit.

Fortunately, at least so far, it isn’t there and I don’t think this particular style of AI can do what they want. That doesn’t mean it isn’t extremely dangerous: combined with drones, autonomous AI agents, even if rather stupid, are going to be extremely dangerous and cause massive changes to our society.

But even if this round fails to get to “real” AI, the dream remains, and for those driving AI adoption, it’s not a good dream.

(I know some people in the field. Some of them are driven by utopian visions, and I salute them. I just doubt the current system, polity and ideology can deliver on those dreams, any more than they delivered on the utopian dreams I remember from the 90s about what the internet would do.)


Fixing Education In The Age of “AI” Is Simple, But Hard

As has been noted, AI is being used to cheat. A lot:

Lee said he doesn’t know a single student at the school who isn’t using AI to cheat. To be clear, Lee doesn’t think this is a bad thing. “I think we are years — or months, probably — away from a world where nobody thinks using AI for homework is considered cheating,” he said.

Clio, the Muse of History


He’s stupid. But that’s OK, because he’s young. What studies are showing is that people who use AI too much get stupider.

The study surveyed 666 participants across various demographics to assess the impact of AI tools on critical thinking skills. Key findings included:

  • Cognitive Offloading: Frequent AI users were more likely to offload mental tasks, relying on the technology for problem-solving and decision-making rather than engaging in independent critical thinking.
  • Skill Erosion: Over time, participants who relied heavily on AI tools demonstrated reduced ability to critically evaluate information or develop nuanced conclusions.
  • Generational Gaps: Younger participants exhibited greater dependence on AI tools compared to older groups, raising concerns about the long-term implications for professional expertise and judgment.

The researchers warned that while AI can streamline workflows and enhance productivity, excessive dependence risks creating “knowledge gaps” where users lose the capacity to verify or challenge the outputs generated by these tools.

Meanwhile, AI is hallucinating more and more:

Reasoning models, considered the “newest and most powerful technologies” from the likes of OpenAI, Google and the Chinese start-up DeepSeek, are “generating more errors, not fewer.” The models’ math skills have “notably improved,” but their “handle on facts has gotten shakier.” It is “not entirely clear why.”

If you can’t do the work without AI, you can’t check the AI. You don’t know when it’s hallucinating, and you don’t know when what it’s doing isn’t the best or most appropriate way to do the work. And if you’re totally reliant on AI, well, what do you bring to the table?

Students using AI to cheat are, well, cheating themselves:

It isn’t as if cheating is new. But now, as one student put it, “the ceiling has been blown off.” Who could resist a tool that makes every assignment easier with seemingly no consequences? After spending the better part of the past two years grading AI-generated papers, Troy Jollimore, a poet, philosopher, and Cal State Chico ethics professor, has concerns. “Massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate,” he said. “Both in the literal sense and in the sense of being historically illiterate and having no knowledge of their own culture, much less anyone else’s.” That future may arrive sooner than expected when you consider what a short window college really is. Already, roughly half of all undergrads have never experienced college without easy access to generative AI. “We’re talking about an entire generation of learning perhaps significantly undermined here,” said Green, the Santa Clara tech ethicist. “It’s short-circuiting the learning process, and it’s happening fast.”

This isn’t complicated to fix. Instead of essays and unsupervised out-of-class assignments, instructors are going to have to evaluate knowledge and skills with:

  • Oral tests. Ask them questions, one on one, and see if they can answer and how good their answers are.
  • In-class, supervised exams and assignments. No AI aid, with proctors there to make sure of it: can you do the work without help?

The idea that essays and take-home assignments are the way to evaluate students wasn’t handed down from on high, and hasn’t always been the way students’ knowledge was judged.

Now, of course, this is extra work for instructors and the students will whine, but who cares? Those who graduate from such programs (which will also teach how to use AI; not everything has to be done without it) will be more skilled and capable.

Students have always been willing to cheat themselves by cheating and not actually learning the material. This is a new way of cheating, but there are old methods which will stop it cold, IF instructors will do the work, and if they can give up the idea, in particular, that essays and at-home assignments are a good way to evaluate students. (They never were, entirely; there was an entire industry for writing other people’s essays, which I assume AI has pretty much killed.)

AI is here, and adapting to it requires changes. That’s all. And unless things change, it isn’t going to replace all workers or any such nonsense: the hallucination problem is serious, researchers have no idea how to fix it, and right now there is no US company making money on AI. Every single query, even from paying clients, costs more to run than it returns.

IF AI delivered reliable results, and thus really could replace all workers by fully automating knowledge work, companies might be willing to pay a lot more for it. But as it stands right now, I don’t see the maximalist position happening. And my sense is that this particular model of AI, a blend of statistical compression and reasoning, cannot be made reliable, period. A new model is needed.

So, make the students actually do the work, and actually learn, whether they want to or not.

This blog has always been free to read, but it isn’t free to produce. If you’d like to support my writing, I’d appreciate it. You can donate or subscribe by clicking on this link.

 
