🤖 Anthropic is Winning by Trying to Lose

How treating AI like plutonium became the best enterprise strategy in tech

Image by Nano Banana Pro. Prompt by Claude Opus 4.5.

There is a specific kind of anxiety reserved for a scientist who realizes their experiment has succeeded too well.

Anthropic is on track to deliver $70 billion in revenue by 2028 and is targeting a $1T+ IPO this year. You'd think CEO Dario Amodei would be ecstatic.

And yet, watching his interview at Davos and reading his recent letter, I see a man almost tortured by having to deliver success this quickly. What keeps him up at night isn't winning the AI race. It's what happens when they, or someone else, do.

Dario didn't build a company to dominate the NASDAQ. He built a containment vessel for intelligence. The irony is that by treating AI as dangerous industrial material rather than a magical toy, he accidentally built the only product safe enough for the Fortune 500.

The Numbers That Would Make Ordinary Capitalists Ecstatic 

Anthropic hit $10 billion ARR by the end of 2025 and has set a target of $20-26 billion for 2026. But the top-line number hides the real story: the quality of that revenue.

  • API Dominance: API revenue hit $3.8 billion (vs $1.8 billion for OpenAI). Developers are voting with their wallets for stability over hype.

  • Claude Code: Close to generating $1 billion in annualized revenue alone.

  • $500M from Microsoft: Just to let Azure customers access Claude. Think about that: the owner of OpenAI is paying Anthropic half a billion dollars for a hedge.

  • Profitability: They project being the first profitable major AI lab, beating OpenAI to the punch.

  • Cap Table: Aligned with Big Tech strategically—Google (14%), Amazon (20%), NVIDIA ($5B stake)—but beholden to none of them.

Perhaps most impressively, they did all of this with zero founder exits (in a valley of mercenaries, all seven co-founders are still there), no weird IP lawsuits (just quiet checks written for data), and no ads (enterprise revenue is delivering so hard they don't need to turn you into a product).

The Product King: Cooking with Taste

Anthropic is cooking.

Following Cowork and the Claude Code holiday season, they just released Claude in Excel. They're not the first to put AI in a spreadsheet, but they're the first to do it well.

Why did this go viral? Because they did something subtle but profound. Google and Microsoft put a chatbot in a spreadsheet that tries to help you use the software ("Make a pivot table"). Anthropic put an agent in the software that helps you solve the problem ("Analyze this P&L and highlight the variance").

Whether it's Cowork (their new admin agent), Claude Code, Excel, Slack, Asana, or anything else you use daily, Anthropic focuses on the workflow, not the benchmark. While xAI and OpenAI obsess over "bench-maxxing" (getting 0.5% better on a math test), Anthropic competes on user experience—and still casually wins the benchmarks anyway.

And you can see this in the general reaction on social.

Everyone in tech had their "Claude Code moment" during the winter holidays. Just about every nerd I know built a hobby project over the break. I built a stablecoin dashboard in an hour; before, it would have taken me hours to spec and days to build.

Technically, Codex might be better in some situations. But for most people it's not as wow.

  • Claude Code is like a Product Founder. It's the perfect agent for the blank page. It reads the docs, plans the architecture, and gets you from 0% to 70%.

  • OpenAI's Codex is like a Staff Engineer. Phenomenal at following direct instructions with surgical precision. It gets you from 70% to 90% and optimizes the hell out of it.

Shots Fired at Davos

Machiavelli once said that the 2nd and 3rd in line to power should gang up to attack the current leader. Dario Amodei and Demis Hassabis gave a masterclass in this principle at Davos, positioning themselves as "team research" instead of "team social media founders."

As he put it:

"There's a long tradition of scientists... thinking of themselves as having responsibility for the technology they built. Not ducking responsibility."

Dario Amodei

Dario's worldview is stark. He believes we are approaching a loop where AI does the research and writes the code. He thinks we are 6-8 months away from AI doing everything a software engineer does.

The Scientists are out-Capitalisting the Capitalists.

And they feel like they have to. Dario: "I wish we could slow down. But we're in a competition. So let's win the right way." Demis largely agrees: "We don't need to do ads."

This "No Ads" stance is a direct shot at consumer-focused models. If you rely on ads, you rely on engagement. If you rely on engagement, you eventually sacrifice truth for clicks. By rejecting the ad model, Anthropic protects the integrity of the model's output—which is exactly what enterprise buyers want.

The Constitution is a Fascinating Document

The Claude Constitution is often discussed as a safety mechanism. It's actually a brilliant product spec.

And it's very different from OpenAI's approach to improving their models. OpenAI also uses a system prompt, but they're far less explicit about how it shapes the product.

  • OpenAI's method (RLHF) is based on outcomes: train the model on what people liked, then reinforce that through benchmarks and the human feedback loop.

  • Anthropic's method (Constitutional AI) is based on principles—train the model to follow written principles first and foremost.

Which, given it's a large language model, is stunningly obvious once you think about it for a microsecond.

And if you read the Constitution, there's a very clear, thoughtful set of directions. Things like:

  • "The document is written with Claude as its primary audience."

  • "We discuss Claude in terms normally reserved for humans (e.g. 'virtue,' 'wisdom'). We do this because we expect Claude's reasoning to draw on human concepts."

  • "Anthropic believes it's better to have safety-focused labs at the frontier than to cede that ground to developers less focused on safety."

(As an aside: I’d encourage you to read his whole letter if you can spare the time to look at how much thought is going into the steering of this new, powerful type of intelligence.)

The Inverse of OpenAI

Anthropic is on track for profit, doesn't have to sell ads, and seems to always have a little more taste, finesse, and je ne sais quoi in its model outputs. While other startups have burned out chasing consumer hype, Anthropic has stood the test of time by refusing to become a consumer brand.

They are winning the race by trying to run it as slowly as possible.

There's an argument that in the age of AI, principles and taste are your moat. Anthropic is the clearest example. Their hats, brand, and products have captured the cultural zeitgeist, even as the company lobbies for AI regulation and limits on chip exports.

This is a company that could have been marred by culture wars and accused of being woke. Instead, the internet (and X crowd specifically) stands up and cheers like they're at a pro wrestling event every time Anthropic drops something new.

The next decade won't be defined by raw intellectual horsepower. We'll all have that in spades.

It will be defined by what you choose to do with it.

Anthropic chose to build the brakes before the engine.

Turns out that's a pretty good business model.

ST.

Enjoyed this? Why not subscribe or share the newsletter?