When you look at AI right now, it has one major use case that people are really willing to pay for: coding. That means Cursor and, to a lesser extent, Replit. Take Cursor as an example: it is built on top of other companies' AI.
This is a problem, because Cursor has no service to sell without making calls to other companies' AIs, and those companies can raise prices and leave Cursor to eat the increase.
As Zitron notes, this is what actually happened recently:
A couple of weeks ago, I wrote up the dramatic changes that Cursor made to its service in the middle of June on my premium newsletter, and discovered that they timed precisely with Anthropic (and OpenAI to a lesser extent) adding “service tiers” and “priority processing,” which is tech language for “pay us extra if you have a lot of customers or face rate limits or service delays.” These price shifts have also led to companies like Replit having to make significant changes to its pricing model that disfavor users….
…
- On or around June 16 2025 — Cursor changes its pricing, adding a new $200-a-month “Ultra” tier that, in its own words, is “made possible by multi-year partnerships with OpenAI, Anthropic, Google and xAI,” which translates to “multi-year commitments to spend, which can be amortized as monthly amounts.”
- A day later, Cursor dramatically changed its offering to a “usage-based” one where users got “at least” the value of their subscription — $20-a-month provided more than $20 of API calls — in compute, along with arbitrary rate limits and “unlimited” access to Cursor’s own slow model that its users hate.
- June 18 — Replit announces its “effort-based pricing” increases.
- July 1 2025 — The Information reports Anthropic has hit “$4 billion annual pace,” meaning that it is making $333 million a month, or an increase of $83 million a month, or an increase of just under 25% in the space of a month.
In other words, Anthropic, which still isn't making money even now, increased its prices, and Cursor and Replit were forced to pass those increases on to their customers, making their products worse in the process.
American AI isn't profitable. Each call costs more than anyone is charging their customers. And since there are very few major AI providers (OpenAI, Anthropic, and xAI, basically), anyone who uses these services is exposed to sudden price increases. Indeed, since none of these companies is making money, it's hard to see how anyone could expect anything but price increases.
Now here's the thing about DeepSeek, a Chinese AI: it costs 97% less to run than American AI. You'd think that American AI companies, seeing this, would have looked at how DeepSeek did it, but they haven't; they're piling on spending and costs instead.
And here's the second thing: DeepSeek is open source. You can run it on your own servers and you can build on it.
So: 30x cheaper, and you can't be hit with sudden but entirely foreseeable price increases.
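To make the economics concrete, here is a minimal sketch of why a per-call cost gap of that size matters to a reseller like Cursor. All the numbers are illustrative assumptions chosen to match the article's rough "30x cheaper" figure, not real price sheets:

```python
# Illustrative sketch only: the subscription price, call volume, and
# per-call costs below are assumptions, not quoted from any provider.

def monthly_margin(subscription_price, calls_per_month, cost_per_call):
    """Margin a reseller keeps per user after paying upstream API costs."""
    return subscription_price - calls_per_month * cost_per_call

# Hypothetical numbers: a $20/month plan, 2,000 model calls per user.
EXPENSIVE_COST_PER_CALL = 0.015                      # assumed upstream price
CHEAP_COST_PER_CALL = EXPENSIVE_COST_PER_CALL / 30   # ~30x cheaper, per the article

expensive = monthly_margin(20.0, 2000, EXPENSIVE_COST_PER_CALL)
cheap = monthly_margin(20.0, 2000, CHEAP_COST_PER_CALL)

print(round(expensive, 2))  # -10.0: every subscriber loses the reseller money
print(round(cheap, 2))      # 19.0: the same plan is comfortably profitable
```

Under these assumed numbers, the identical $20 plan flips from a loss to a profit, and the upstream provider controls the cost side of that equation entirely, which is exactly the exposure the Cursor and Replit repricing illustrates.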
Why would you use American AI? (No, it’s not that much better.) The only real reason is legal risk: America wants to win the AI race and it’s willing to use sanctions to do so.
But if you're in a country outside the Western sphere, you'd be insane to use American AI. Absolutely nuts. And even inside the Western sphere, building on American AI is incredibly risky.
So Chinese AI is going to win. Sanctions may slow it down, but open source and 30x cheaper is one hell of a combo.
It didn't have to be like this. OpenAI wasn't supposed to be a for-profit enterprise, and DeepSeek's methods of lowering costs could be emulated. But that doesn't seem to occur to American AI companies.
American tech is completely out to lunch. Absolutely insane. A thirtyfold cost differential is not something you can just ignore, nor is the fact that American AI companies will absolutely have to raise prices, and raise them massively.
So, yet again, China is going to win, because American corporate leaders are, apparently, morons.
If you've read this far, and you read a lot of this site's articles, you might wish to Subscribe or donate. The site has over 3,500 posts, and the site, and Ian, take money to run.
david lamy
We citizens of the United States get steeped in the paradigm of American Exceptionalism. AI is no different than our misguided beliefs in our military superiority or superior standard of living.
david lamy
In addition to my previous short comment: if an AI did not hallucinate, steal, and charge what it judges a luser (old-time sarcasm for user) can bear, it would not be American (of the US variety, not Canadian or Mexican).
Wackadoodledoo
Well, except for that study that said programmers using AI are 20% slower than those not using it, so how long are they going to be willing to pay anything at all for it?
elkern
Wall Street is wetting its pinstripes in anticipation of the layoffs they expect AI to enable. It would be rather funny except for all the misery and deaths it will cause.
Misery, mostly for the people who get laid off, their families, and their communities (= all of us).
Death, because GOOD programming is as much about understanding the Real World as it is about coding, and AI doesn’t do ‘understanding’.
I was a professional programmer for the last few decades of my working life (laid off at age 65 when Covid crashed the Aerospace Industry). I was a decent coder, but never the fastest. My value to the company – whether it/they knew it or not – was that I understood the business (and the people), and I could help users get what they really needed to do their jobs (which was rarely what they first asked for).
Smart, young, greedy Zips (see Firesign Theatre for definition) with MBAs will ‘know’ that AI allows them to write their own programs to get whatever they want from whatever database they use. The computers will gladly spew out whatever pile of bits they are asked for, and the MBAs will use the results for Important Company Decisions.
Often, that will work just fine. But sometimes, it won’t, and those are the dangerous cases.
Most meatbags think that GIGO means “Garbage In, Garbage Out”, but us Geeks know it’s really “Garbage In, Gospel Out”.
NR
I would not be at all surprised if AI was terrible at coding. It’s terrible at basically everything else people use it for, hallucinating and making mistakes all the time. We had an example of this in the last open thread here, where a commenter used AI to write their comment and it said something that was flat-out wrong and contradicted by its own sources. Or, for a higher-profile example, look at RFK Jr.’s recent “Make America Healthy Again” report, which was written by AI and cited sources that didn’t exist.
The attraction of AI (for some people) is that it’s quick and easy, but it comes at the cost of quality and accuracy. I guess if you don’t care about those things, it’s fine, but coding is a place where you can’t exactly let that stuff slide.
Anyway, the DeepSeek analysis I saw several months ago said that OpenAI is a little bit better at a few things and DeepSeek is a little bit better at a couple of things. But the point is, they're apparently very close in performance and DeepSeek is significantly cheaper (I've seen figures of 20x cheaper rather than 30x, but even that's still a massive difference). And the open-source aspect can't be discounted either. People complain that the Chinese version has censorship, but you can just download the source code and make your own version without the censorship.
But ultimately, it won't matter how cheap AI is if it can't do what you want it to. If all it does is churn out broken, buggy code, there's no use case for it. We'll see, I suppose.