AI Industry Power, Platform Shifts, and the Rising Stakes of Building at the Frontier
A cluster of developments across hardware, infrastructure, enterprise security, and public accountability is reshaping how AI is built, deployed, and governed. Executives, regulators, and the public are all recalibrating their relationships with the companies and individuals driving the technology.
Apple Moves Toward Functional Smart Glasses
According to Bloomberg's Mark Gurman, Apple is actively testing four distinct frame designs for smart glasses with a target launch of 2027. The designs include oval and circular options across multiple sizes and colorways—black, blue, and light brown among them. Critically, the devices will not feature augmented reality overlays or mixed-reality displays. Functionality is expected to mirror Meta's Ray-Ban smart glasses: photo and video capture, call handling, music playback, and integration with Apple's AI-enhanced Siri.
The discussion frames this as a strategic retreat from the Vision Pro's positioning. The Vision Pro headset, Apple's high-end spatial computing device, is characterized as a commercial disappointment with limited consumer adoption. The smart glasses play is interpreted as Apple accepting market reality—prioritizing accessible, wearable AI hardware over immersive but niche mixed-reality experiences. Meta's Ray-Ban glasses are cited as proof that the category has genuine consumer demand.
Vercel's AI-Driven Growth Signals Infrastructure Winners
Vercel, the web deployment and hosting platform, has seen its annual recurring revenue grow from approximately $100 million at the start of 2024 to a run rate of $340 million by late February of this year—a more than threefold increase in roughly 14 months. CEO Guillermo Rauch has described the company as "very much a working public company" that is "ready and getting more ready every day" for an IPO.
A particularly significant data point: 30% of the applications currently running on Vercel's platform were deployed by AI agents rather than human developers. The discussion attributes this directly to the tight integration between Vercel and AI coding tools such as Claude Code, which reportedly recommends Vercel as a default deployment target and can autonomously handle domain configuration, server setup, and database provisioning. The broader implication is that infrastructure companies with strong AI tool integrations—Vercel, Supabase, and similar platforms are mentioned—are emerging as structural beneficiaries of the AI development wave, regardless of which foundation model wins market share.
Anthropic's OpenClaw Ban Exposes Open Source Tensions
Anthropic temporarily suspended the account of Peter Steinberger, creator of OpenClaw—an open-source AI coding tool that gained significant traction—citing "suspicious activity." The suspension followed a policy change in which Anthropic's Claude subscriptions no longer cover usage through third-party tools like OpenClaw; users must now pay separately via the API.
The discussion contextualizes this as a subsidy problem rather than a principled stance against open source. The $200-per-month Claude subscription is described as covering usage that would otherwise cost thousands of dollars in API credits, and Anthropic is characterized as unwilling to extend that subsidy to third-party tool users. The episode notes that Steinberger subsequently joined OpenAI, through what was described as either a hire or an acquisition, adding a competitive dimension. The incident generated significant backlash and surfaces an unresolved tension between proprietary platform economics and the open-source developer ecosystem that has been central to AI adoption.
Banks Testing Anthropic's Mythos Model Amid Legal Conflict
Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened a meeting with major bank executives and encouraged them to test Anthropic's Mythos model—a capability-focused model that Anthropic has not publicly released, citing safety concerns—for detecting cybersecurity vulnerabilities. JPMorgan Chase was already a named partner; Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley are now reported to be evaluating it as well.
The discussion highlights a notable contradiction: Anthropic is simultaneously in a legal dispute with the Trump administration after the Department of Defense designated the company a supply chain risk, stemming from Anthropic's insistence on restricting certain military applications of its models. The administration's encouragement of Mythos adoption in the financial sector appears to proceed on a separate track from that conflict. UK financial regulators are also examining Mythos, though from a different angle—concerned about what the model's unreleased capabilities imply about the vulnerability landscape it can exploit.
Sam Altman, the New Yorker Profile, and a Turning Point in Public Perception
A Ronan Farrow and Andrew Marantz investigation in The New Yorker, drawing on more than 100 sources, painted a critical portrait of OpenAI CEO Sam Altman. Sources described him as possessing "a relentless will to power" that distinguished him from his peers; one anonymous former OpenAI board member characterized him as combining a strong desire to be liked with what they called a "sociopathic lack of concern for the consequences of deceiving someone."
Days after publication, Altman's San Francisco home was attacked with a Molotov cocktail. No injuries were reported; the suspect was later arrested at OpenAI headquarters while making additional threats. Altman published a public response acknowledging personal flaws, specifically citing conflict aversion that led to dishonesty, and referencing the 2023 board crisis—when he was briefly removed and then reinstated as CEO—as something he handled badly. He also introduced what he called "ring of power dynamics," describing how the prospect of controlling AGI leads individuals and institutions to extreme behavior, and argued the solution lies in broadly distributing the technology rather than concentrating it—a statement the discussion notes sits in tension with OpenAI's evolution from open-source nonprofit to the world's most valuable private company.
The discussion treats the convergence of the profile, the attack, and Altman's response as a potential inflection point in how the public relates to AI's leading figures—less as neutral technologists and more as political actors with significant power and contested accountability.
---
Key takeaways:
- Apple's 2027 smart glasses strategy represents a deliberate pivot away from premium mixed-reality hardware toward accessible, AI-integrated wearables modeled on Meta's proven Ray-Ban approach.
- Vercel's revenue more than tripling in roughly 14 months—with 30% of platform deployments now agent-generated—illustrates how AI coding tools are creating structural advantages for infrastructure providers with deep API integrations.
- Anthropic's suspension of OpenClaw's creator signals that subsidized consumer subscriptions and open-source third-party tooling are on a collision course as AI platforms mature and seek sustainable unit economics.
- The U.S. government's encouragement of Anthropic's Mythos model for financial-sector security testing, even as it remains in legal conflict with the company over military use restrictions, reflects the fragmented and often contradictory nature of current AI governance.
- The attack on Sam Altman's home and the scale of sourcing behind the New Yorker investigation suggest that AI's leading figures are entering a new phase of public and political scrutiny, with reputational and physical risks that did not exist at this intensity even two years ago.