Anthropic, OpenAI, and the SaaS Reckoning: Power, Principles, and the Limits of Private Capital
Three interlocking stories—Anthropic's confrontation with the Pentagon, OpenAI's $110 billion private raise, and Block's 40% workforce reduction—illuminate a broader inflection point for enterprise technology: the collision between AI-era valuations, state power, and the structural deterioration of legacy SaaS growth.
Anthropic's Pentagon Rupture: Principles as Organizing Strategy
The discussion centers on Anthropic's $200 million contract with the Department of Defense, which collapsed after Anthropic sought to impose two restrictions on model use: prohibitions on "mass surveillance" and on autonomous weapons. The Pentagon's counter-position was that it required the right to do anything "legal." The bid-ask spread did not close, and the Pentagon broke off negotiations, threatening to declare Anthropic a supply chain risk—a designation that could prevent other government vendors from using Anthropic's technology, potentially a far more damaging outcome than losing a single contract.
The analysis frames Dario Amodei's position as less a principled stand than a labor-retention necessity. Anthropic's founding identity is built around AI safety as a "messianic" organizing principle—one credited with keeping its seven co-founders intact while competitors have lost nearly all of theirs. The argument is that Amodei could not credibly tell his team that making weapons "safer" than competitors justified the contract, and that maintaining workforce unity required him to walk away. The conclusion: he was right to exit, but was naive to have entered the relationship at all.
The deeper structural point is about state power. The discussion invokes the Weberian concept of the state's monopoly on legitimate violence: the Department of Defense is constitutionally empowered to defend the country, and a privately held AI company cannot realistically impose conditions on how that mandate is executed. The historical parallel offered is the Manhattan Project—military leadership humored scientists but excluded them from the ultimate decision to use the bomb. The lesson applied here is that Anthropic drifted into a position of trying to interpose its judgment over a constitutional authority, and the state has tools—the Defense Production Act, supply chain risk designations—that dwarf anything a private company can deploy in response.
On the shareholder question: the consumer lift (Anthropic briefly topped the App Store ahead of ChatGPT) is real, but the risk profile has widened. The framing offered is "same expected return, higher variance," which, for any risk-averse investor, is by definition a worse position. OpenAI's Sam Altman moved quickly to fill the gap, though his own workforce reportedly pushed back immediately, forcing him to announce unilateral contract term changes the following day, a move itself described as an embarrassing improvisation.
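The variance point can be made precise with a standard mean-variance utility function. This is a minimal sketch: the two return distributions below are hypothetical, chosen only so that both scenarios share the same expected return while the second has a wider spread.

```python
import statistics

# Hypothetical return scenarios, chosen so both have the same expected return;
# the numbers are illustrative, not estimates of Anthropic's actual outcomes.
before = [0.125, 0.25, 0.375]   # narrower outcome spread, mean 0.25
after = [-0.25, 0.25, 0.75]     # wider outcome spread, same mean 0.25

def mean_variance_utility(returns, risk_aversion=2.0):
    # Standard mean-variance utility: U = E[r] - (lambda / 2) * Var[r]
    return statistics.mean(returns) - 0.5 * risk_aversion * statistics.pvariance(returns)

u_before = mean_variance_utility(before)
u_after = mean_variance_utility(after)

assert statistics.mean(before) == statistics.mean(after)  # same expected return
assert u_after < u_before  # higher variance is strictly worse for a risk-averse holder
```

Under any positive risk aversion the variance term penalizes the wider distribution, which is all the "worse position for investors" claim requires.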
OpenAI's $110 Billion Round: The Limits of Private Capital
OpenAI's latest raise is described as four times the size of the largest IPO in history, and the combined OpenAI and Anthropic fundraising in the current period exceeds total U.S. venture investment for all of last year. The round's structure warrants scrutiny: of Amazon's $50 billion commitment, only $15 billion is upfront; the remainder is contingent on either an IPO or the achievement of AGI. Amazon's own free cash flow has fallen to approximately $11 billion annually due to capital expenditure, meaning it cannot easily fund its full commitment from operating cash.
The discussion argues that the logical conclusion of this capital exhaustion is that the next major financing event for OpenAI, Anthropic, and SpaceX will be public offerings. The private investor base—SoftBank, Nvidia, Amazon—has been largely tapped out. An October IPO at approximately $1.5 trillion is floated as a plausible scenario, with the observation that Amazon's unfunded commitment could effectively serve as a pre-sold IPO book.
On the "founder premium" question: the analysis distinguishes sharply between Sam Altman and Elon Musk. Altman's value is attributed to deal-making and fundraising capacity; his departure would likely result in a modest valuation decline (from roughly $800 billion to $600 billion) because the underlying technical organization would remain intact. Musk's premium is assessed as far larger relative to underlying asset performance—Tesla trades at 10–13 times revenue on declining revenue, and without Musk, the robotics and robotaxi theses that justify the valuation would collapse, potentially reducing the company from $1 trillion to $200 billion.
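The gap between the two founder premia follows directly from the valuations cited above. A minimal sketch using those figures; treating the "without the founder" value as the standalone asset base is an illustrative assumption, not a reported metric:

```python
# Founder-premium comparison using the valuations cited in the text; treating
# the "without the founder" figure as the standalone asset base is an
# illustrative assumption, not a reported metric.
def founder_premium(market_cap_bn, value_without_founder_bn):
    # Fraction of the current valuation attributable to the founder.
    return (market_cap_bn - value_without_founder_bn) / market_cap_bn

altman_premium = founder_premium(800, 600)   # OpenAI: ~$800B -> ~$600B without Altman
musk_premium = founder_premium(1000, 200)    # Tesla: ~$1T -> ~$200B without Musk

print(f"Altman premium: {altman_premium:.0%}")  # -> Altman premium: 25%
print(f"Musk premium: {musk_premium:.0%}")      # -> Musk premium: 80%
```

On these numbers, roughly a quarter of OpenAI's valuation rides on Altman versus four-fifths of Tesla's on Musk, which is the asymmetry the analysis is pointing at.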
The SaaS Structural Decline: Beyond Vibe Coding
The discussion challenges the prevailing narrative that AI-assisted coding ("vibe coding") is the primary threat to software companies. The argument is that the deeper problem is growth deceleration that predates and exceeds any coding productivity effect. The pattern described: from 2009 to roughly 2020, the public SaaS basket grew at an average of 30% and traded at approximately 6x revenue. COVID produced two years of 40% growth at 20x revenue. Since then, growth has decelerated not back to 30% but to 15% or below, while multiples temporarily held—creating a false sense of normalization. The market's recent repricing reflects a recognition that the deceleration is structural rather than cyclical, and that AI makes it permanent.
The MongoDB example is cited: a strong quarterly result accompanied by guidance implying growth falling to the low-to-mid twenties triggered a 20–25% stock decline because the company was trading at 40x forward EBITDA. The math is unforgiving at elevated multiples.
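That unforgiving math can be sketched with invented numbers (these are not MongoDB's actual financials), under the common heuristic that a high-multiple stock is priced roughly in proportion to expected growth, so a guidance cut compresses the multiple along with the estimate:

```python
# Hypothetical multiple-compression math; all figures are invented for
# illustration and are not MongoDB's actual financials.
forward_ebitda = 100.0   # forward EBITDA, $M (hypothetical)
old_multiple = 40.0      # pre-guidance forward multiple
old_growth = 0.30        # prior expected growth rate (assumed)
new_growth = 0.23        # guided-down growth, "low-to-mid twenties"

old_price = forward_ebitda * old_multiple

# Heuristic: at elevated multiples the market prices the stock roughly in
# proportion to expected growth, so the multiple re-rates with the guide-down.
new_multiple = old_multiple * (new_growth / old_growth)   # ~30.7x
new_price = forward_ebitda * new_multiple

decline = 1 - new_price / old_price
print(f"implied decline: {decline:.1%}")  # -> implied decline: 23.3%
```

A seven-point growth cut from a 40x starting multiple lands squarely in the 20–25% drawdown range cited, without any change to the underlying forward EBITDA estimate.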
The Block layoff is analyzed as a template for what follows. Block's revenue growth had decelerated from roughly 10% to approximately 3%. The "AI efficiency" framing is described as cover for a straightforward decision: when revenue growth is exhausted, the only remaining lever is cost reduction. The analysis distinguishes sharply between AI as a top-line driver (Salesforce's AgentForce thesis) and AI as an OPEX reduction tool (Block's stated rationale). Block is categorized firmly in the latter, and the 40% reduction is characterized as an implicit abandonment of the growth narrative. The prediction is that other public software companies currently growing in the low-to-mid teens will adopt similar strategies by year-end if they cannot identify a credible re-acceleration path.
The critique of incumbent SaaS leadership is pointed: companies have had 16 months since capable LLMs became widely available, and most have not produced meaningful AI-native revenue. The companies cited as executing credibly—Salesforce, Shopify, Intercom—are distinguished by founder or founder-equivalent leadership willing to cannibalize existing product lines.
---
**Key takeaways:**
- **State power supersedes AI company principles**: Anthropic's Pentagon rupture illustrates that constitutional authorities have enforcement tools—supply chain risk designations, the Defense Production Act—that make conflict with the Department of Defense structurally untenable for private companies, regardless of the merits of their safety positions.
- **OpenAI's capital structure signals imminent IPO**: The exhaustion of the private investor base (Amazon, Nvidia, SoftBank) and the contingent nature of a significant portion of the latest round suggest the next major financing event will be a public offering, likely within 12 months.
- **The SaaS growth problem is structural, not cyclical**: Revenue deceleration to sub-15% growth, combined with AI permanently altering the competitive landscape, means most public software companies face a binary choice: re-accelerate via genuine AI product transformation, or shift to a profitability narrative and cut headcount aggressively.
- **The Block playbook will spread**: Companies growing in the low-to-mid teens that cannot identify AI-driven top-line acceleration by late 2025 are likely to follow Block's model—framing large workforce reductions as AI efficiency gains while implicitly conceding the growth story.
- **Labor holds unprecedented power in frontier AI, and nowhere else**: The talent dynamics at Anthropic and OpenAI, where researchers walk away from eight-figure unvested equity and workforce sentiment can override strategic decisions, represent an extreme inversion of the capital-labor balance seen across the rest of the technology sector.