The headlines are screaming about a $900 billion valuation for Anthropic. Everyone is nodding. "Of course," they say. "It’s the arms race. Compute is the new oil."
They are wrong. They are dangerously, hilariously wrong.
If Anthropic is worth nearly a trillion dollars, then the math of the entire tech economy has fundamentally broken. We aren't looking at a growth story. We are looking at a desperate circular economy where venture capital buys server time from Big Tech, which then reinvests that same capital into the AI startups to keep the valuations high. It is a closed loop of vanity metrics that masks a terrifying lack of clarity about unit economics.
The fallacy of the compute moat
The primary argument for these astronomical valuations is the "moat." The idea is that only a handful of companies can afford the clusters required to train the next generation of models. Therefore, they own the market.
This ignores the trajectory of every commodity since industrialization began.
In the early days of the internet, owning your own fiber-optic cables and server farms was a competitive advantage. Then it became a utility. Today, compute is a commodity. When you base a $900 billion valuation on the cost of your infrastructure rather than the margin of your software, you aren't a tech company. You are a utility company with a very expensive R&D department.
Unlike SaaS, where the cost of serving the thousandth customer is near zero, every query to a frontier model costs real money in electricity and silicon wear. The "scaling laws" that the industry worships suggest that models get smarter as they get bigger. They conveniently forget that the costs also scale. We are seeing diminishing returns in model performance while the capital requirements grow exponentially.
If you have to spend $10 billion to get a 5% improvement in reasoning, and your competitor can "distill" that knowledge into a smaller model for $100 million six months later, your $900 billion valuation isn't a moat. It's a target.
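The "distillation" threat mentioned above is a real, well-documented technique: a small student model is trained to match the output distribution of a large teacher, capturing much of its capability at a fraction of the cost. Here is a minimal sketch of the core loss function; the numbers and the pure-NumPy framing are illustrative, not a production training setup:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution, with temperature."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's and student's softened outputs.

    Minimizing this over the teacher's outputs is the heart of knowledge
    distillation: the student learns the teacher's behavior, not its weights.
    """
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    eps = 1e-12  # avoid log(0)
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

# A student that reproduces the teacher's logits has ~zero loss;
# a mismatched student does not.
teacher = [4.0, 1.0, 0.5]
print(distillation_loss(teacher, teacher))           # ~0.0
print(distillation_loss(teacher, [0.5, 1.0, 4.0]))   # clearly positive
```

The economic point follows directly: the student never needs the teacher's $10 billion training run, only access to its outputs.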
Why OpenAI is the wrong benchmark
The logic being used to justify Anthropic’s price tag is simple: "OpenAI is worth X, so Anthropic must be worth Y."
This is the "Greater Fool" theory dressed up in a Patagonia vest. OpenAI has the first-mover advantage and a massive consumer footprint with ChatGPT. Anthropic, while technically impressive with Claude, is playing a game of catch-up in a market where brand loyalty is non-existent.
I have watched dozens of enterprise clients swap their API keys from one provider to another in a single afternoon because of a 10% price drop or a slightly better context window. There is no "stickiness" here. Developers are mercenary. If Llama 4 or a refined Mistral model provides 90% of the utility for 0% of the licensing fee, the $900 billion valuation evaporates.
To justify this price, Anthropic doesn't just need to be a successful company. They need to own the operating system of the future. They need to be the only way people interact with digital intelligence. But the market is fragmenting, not consolidating. Small, specialized models are outperforming generalist giants in specific high-value tasks like legal discovery and medical coding.
The safety tax is a commercial anchor
Anthropic’s core identity is "AI Safety." They were founded by ex-OpenAI employees who felt the rush to market was dangerous. They use Constitutional AI to ensure their models stay within ethical bounds.
As a human, I appreciate this. As an investor, I see a commercial anchor.
In the enterprise world, "safety" often translates to "refusals." I’ve seen companies dump millions into integrating Claude only to find the model refuses to process certain datasets because it flags them as potentially sensitive, even when they are perfectly legitimate business documents.
When your competitor is willing to provide a "rawer," more flexible model, the enterprise will choose the tool that actually does the job over the one that lectures them on ethics. By baking their moral framework into the product, Anthropic has limited their addressable market to the most risk-averse organizations. That is not how you build a trillion-dollar empire. You build a trillion-dollar empire by being the most useful tool, not the most virtuous one.
The talent war is already over
The assumption is that these companies own the "brains." They don't. They rent them.
The researchers responsible for the breakthroughs at Google, OpenAI, and Anthropic are the most mobile workforce in human history. They move for $10 million signing bonuses and the chance to run their own shops. The "secret sauce" isn't a trade secret; it’s published in papers and replicated on GitHub within weeks.
We are seeing a talent dilution. As these companies grow to thousands of employees, the density of genius drops. They become bloated bureaucracies. I’ve talked to engineers inside these "decacorns" who spend more time in meetings about "alignment" than they do writing code. Meanwhile, a team of five people in a garage in Paris or Tokyo is using open-source tools to build models that rival the giants.
The private market is lying to you
Why is the valuation $900 billion? Because the people setting the price have a vested interest in the number being high.
If you are an early investor in Anthropic, you need the next round to be higher to show "markup" to your Limited Partners. If you are a cloud provider like Amazon or Google, you want the valuation high because it validates your own investment and ensures the startup has the "currency" (in the form of high-priced stock) to hire more engineers who will build more things on your cloud.
It’s a giant game of paper-wealth musical chairs. The music stops when someone tries to go public.
The public market does not care about "potential" the way the private market does. The public market wants cash flow. It wants EBITDA. It wants to know how you are going to defend your margins against a thousand open-source clones.
Stop asking if it's the next Google
People keep asking, "Is this the next Google?"
That is the wrong question. Google succeeded because it had an infinite-margin business model (ads) and a monopoly on navigation. AI models, as they currently exist, have shrinking margins and zero monopoly power.
The right question is: "Is this the next Cisco?"
Cisco was the darling of the dot-com bubble. They sold the "shovels" for the gold rush. Their valuation hit the stratosphere because everyone assumed the internet required their specific hardware forever. Then the technology matured. The hardware became a commodity. The valuation crashed and never recovered to its bubble peaks, even though the company stayed profitable.
Anthropic is selling the plumbing. And the world is rapidly learning how to build its own pipes.
The actionable reality
If you are an enterprise leader, do not lock yourself into a single provider. Do not buy the hype that a high valuation equals a superior product. The more a company raises at these absurd levels, the more they are forced to squeeze their customers to justify the math.
- Prioritize model-agnostic architecture: Build your systems so you can swap Claude for GPT-5 or an open-source model in a heartbeat.
- Own Your Data, Not the Model: The value is in your proprietary information, not the engine that processes it.
- Beware the "Safety" Lock-in: Ensure the ethical filters of your provider don't become functional barriers to your specific use case.
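The first bullet can be sketched as a thin abstraction layer. The provider classes below are hypothetical stand-ins, not real vendor SDK calls; the point is that your business logic should depend only on an interface you own:

```python
from typing import Protocol

class LLMProvider(Protocol):
    """Minimal contract: anything that maps a prompt to a completion."""
    def complete(self, prompt: str) -> str: ...

class HostedModelProvider:
    # Hypothetical stand-in for a commercial API client (Claude, GPT, etc.).
    def complete(self, prompt: str) -> str:
        return f"[hosted] {prompt}"

class LocalModelProvider:
    # Hypothetical stand-in for a self-hosted open-weights model.
    def complete(self, prompt: str) -> str:
        return f"[local] {prompt}"

def summarize(provider: LLMProvider, document: str) -> str:
    # Business logic talks only to the interface, never to a vendor SDK,
    # so swapping providers becomes a one-line configuration change.
    return provider.complete(f"Summarize: {document}")

print(summarize(HostedModelProvider(), "Q3 earnings"))
print(summarize(LocalModelProvider(), "Q3 earnings"))
```

Because `summarize` is typed against the `Protocol` rather than a concrete class, a 10% price drop elsewhere really is a one-afternoon migration, exactly the mercenary behavior described above.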
The $900 billion valuation isn't a sign of Anthropic's strength. It's a sign of a market that has run out of ideas and is now simply betting on the size of the pile.
The crash won't happen because the technology fails. The technology is brilliant. The crash will happen because the business model is a fantasy.
Burn the whitepapers. Look at the balance sheet. If the cost to serve is higher than the value created, it doesn't matter how many billions you raise. You are just a non-profit with a very expensive marketing department.