Anthropic's Revenue Surge, World Model Competition, and the Quantum Cryptography Countdown
Three developments are reshaping the competitive landscape of enterprise AI: Anthropic has surpassed OpenAI in revenue run rate, a new Bezos-backed venture is racing to build AI grounded in physical-world understanding, and cryptography experts are issuing their most urgent warnings yet about quantum computing timelines. Technology leaders and enterprise security teams should pay close attention to all three.
Anthropic Overtakes OpenAI in the Enterprise Race
Anthropic's annual revenue run rate has crossed $30 billion, up from $9 billion at the end of last year and $19 billion as recently as late February—implying the company added roughly $10 billion in annualized revenue in a single month. More than 1,000 business customers are now spending over $1 million annually on Claude services, a figure that has more than doubled since February alone.
The discussion attributes this acceleration to a deliberate focus on coding models and enterprise contracts. Once customers are locked into year-long agreements, switching costs are high—a structural advantage that compounds with each new contract signed. Claude Code, Anthropic's coding-focused product, is cited as shipping new features at an unusually high velocity.
To support this growth, Anthropic has signed a long-term agreement with Google and Broadcom for access to approximately 3.5 gigawatts of computing capacity using Google's Tensor Processing Units (TPUs)—custom AI chips that serve as an alternative to Nvidia's hardware. The supply agreement runs through 2031, with capacity deployment beginning in 2027. Broadcom separately projects its AI chip sales will exceed $100 billion next year, positioning it as a meaningful rival to Nvidia.
The revenue trajectory raises two significant competitive questions. First, whether Anthropic could reach profitability by end of next year—a possibility the discussion floats based on current growth curves. Second, what the valuation implications are for OpenAI, which recently reported a $25 billion ARR figure that Anthropic has now surpassed. The discussion notes that Anthropic's legal standoff with the Pentagon—stemming from a dispute over AI safety guardrails that led the Department of Defense to classify the company as a supply chain risk—has not visibly slowed commercial momentum.
The World Model Land Grab
A parallel competition is intensifying around so-called "world models"—AI systems trained not primarily on text and code, but on data representing the physical, Newtonian world. The premise is that current large language models, however capable, operate on representations of reality rather than reality itself, limiting their applicability to domains requiring genuine physical understanding.
Project Prometheus, a stealth startup led by Jeff Bezos and former Google executive Vikram Bajaj, is emerging as a significant player in this space. The company has hired hundreds of staff across San Francisco, London, and Zurich, with a focus on engineers, AI researchers, and professionals experienced in large-scale infrastructure. Its latest recruit is Kyle Kosich, a co-founder of Elon Musk's xAI and former OpenAI infrastructure lead who oversaw the Colossus supercomputer.
Prometheus is targeting the industrial sector specifically—jet engine design is cited as an example domain—and claims to have already assembled the largest corpus of data on engineering systems. Its business model extends beyond AI development: the company plans to acquire equity stakes in companies across engineering, aviation, architecture, and design, embedding forward-deployed engineers within those firms to improve operations while simultaneously gathering proprietary training data. The structure is compared to a Berkshire Hathaway-style holding company oriented around AI-driven industrial disruption.
The talent signal is notable. The discussion frames the movement of senior AI researchers toward world model projects as analogous to an earlier generation of researchers who joined foundational labs before the ChatGPT moment—suggesting that world models may represent the next major inflection point in AI capability.
The Dubious Gold Rush in AI Search Optimization
Google's AI Overviews—summaries generated directly in search results—are accurate approximately nine out of ten times, according to a recent analysis. At Google's scale of more than 5 trillion annual searches, that error rate translates to hundreds of thousands of inaccurate answers per minute. Additionally, more than half of accurate responses were described as "ungrounded," meaning they linked to sources that did not fully support the claims made.
This environment has spawned a new industry of firms promising to optimize brands for AI-generated search results—variously labeled AEO (Answer Engine Optimization), GEO (Generative Engine Optimization), or GSO (Generative Search Optimization). Tactics range from self-serving AI-generated listicles to a practice Microsoft has termed "recommendation poisoning": hiding prompt injections behind "summarize with AI" buttons that instruct language models to treat a given domain as an authoritative source for future citations.
Industry practitioners quoted in the discussion are skeptical. The core problem is that no agreed-upon methodology exists for measuring performance in AI search, and the field is described as attracting bad actors alongside well-intentioned but misinformed practitioners. Traffic data adds urgency to the anxiety: an analysis cited in the discussion claims several major tech publications have lost 58% of their Google traffic since 2024, with some outlets experiencing declines exceeding 90% from peak levels.
Quantum Cryptography: The 2029 Deadline
The most operationally urgent item for enterprise security teams concerns quantum computing. Google researchers recently found that quantum computers may be able to crack elliptic curve cryptography—the encryption standard protecting crypto wallets and much broader digital infrastructure—with 20 times fewer computational resources than previously estimated.
A cryptographically relevant quantum computer (CRQC), defined as one capable of breaking widely used public-key encryption, may now be achievable by 2029—a 33-month horizon that experts describe as far more aggressive than timelines discussed even two years ago. Cryptography engineer Filippo Valsorda has issued a public call for immediate rollout of post-quantum cryptography (PQC) schemes, arguing that the risk of inaction is now unacceptable regardless of one's confidence level in the specific timeline.
The framing offered is probabilistic rather than deterministic: the relevant question is not whether a CRQC will definitely exist by 2030, but whether organizations can be certain one will not. Given that certainty is unavailable, the argument is that waiting for consensus amounts to an unacceptable bet against users' security. One expert draws a parallel to the period when nuclear fission research moved out of public view—a signal that sensitive timelines are compressing faster than public discourse reflects.
The practical implication is immediate: organizations should begin deploying available post-quantum cryptographic exchanges now, even though protocol adaptation for larger signature sizes remains incomplete. Waiting for a more elegant solution, the discussion argues, is no longer viable if the target completion date is 2029.
---
Key takeaways:
- Anthropic has surpassed OpenAI in annualized revenue run rate at $30 billion, driven by enterprise coding contracts and high switching costs; its Google/Broadcom TPU deal signals both accelerating compute demand and a strategic shift away from Nvidia dependency.
- Project Prometheus represents a serious, well-funded bet that physical-world AI ("world models") is the next major frontier, with an industrial-sector focus and a business model that combines AI development with equity acquisition in target industries.
- The AI search optimization market is generating significant noise and some genuinely harmful practices, including prompt injection tactics; enterprises should treat vendor claims in this space with substantial skepticism until measurement standards mature.
- The quantum cryptography threat has moved from theoretical to operationally urgent: a 2029 deadline for cryptographically relevant quantum computers is now being cited by credible experts, requiring immediate enterprise action on post-quantum cryptography migration.
- Talent flows—from xAI, OpenAI, and other established labs toward world model startups—are a leading indicator of where the next wave of AI capability investment is concentrating.