Podcast Guide

Meta Manus Desktop App, Anthropic Enterprise Lead, OpenAI AWS Deal

The Jaeden Schafer Podcast

Published
March 18, 2026
Duration
12:10
Summary source
description
Last updated
Apr 25, 2026

Discusses OpenAI, Anthropic, and AI agents.

Summary

In this episode, we discuss Meta's new Manus desktop AI agent and its implications for AI as an operating system layer. We also cover startup Niv AI addressing AI data center energy consumption, Memories AI's visual memory layer for robotics, and Anthropic's dominance in new enterprise AI spending over OpenAI, which is now strengthening its government footprint with an AWS deal.

Meta's Manus hits desktop, Anthropic dominates enterprise AI spend, a startup builds visual memory for robots, and OpenAI quietly embeds itself into U.S. government infrastructure via AWS.

Key takeaways

  • Anthropic has captured over 70% of new enterprise AI spend according to Ramp data, outpacing OpenAI in B2B adoption despite OpenAI's larger consumer user base and comparable revenue trajectory.
  • OpenAI's new AWS GovCloud deal positions them inside the default government procurement pipeline, directly challenging Anthropic on what was considered Amazon/AWS home turf and signaling enterprise-grade security credibility.
  • AI agents are moving from cloud-based interfaces to local desktop environments (Meta/Manus) and physical systems (Memories AI for robotics), marking a structural shift toward agents that access files, run apps, and retain visual memory.

Why this matters

The AI competitive landscape is consolidating around distribution infrastructure and trust rather than model capability alone, meaning enterprise procurement decisions will increasingly be shaped by government contracts, cloud partnerships, and security compliance rather than benchmark performance.


Strategic Intelligence Report
The Three-Layer AI Battle: Infrastructure, Enterprise Spend, and Agent Deployment

Four concurrent developments (desktop AI agents, enterprise market share shifts, physical AI memory, and government distribution deals) are reshaping competitive dynamics across the AI industry simultaneously. Executives and enterprise technology buyers should pay close attention to which companies are winning not just on model capability, but on distribution control, energy resilience, and trust infrastructure.

The Desktop Agent Inflection Point

Manus, an AI agent firm with origins in China that Meta recently acquired, has launched a desktop application that moves its AI agent from cloud-hosted interfaces directly onto users' local machines. The discussion frames this as a categorical shift: agents that previously operated in the cloud can now access local files, run applications, organize data, and build software within a user's environment. The competitive reference point is OpenClaw (likely "Claude" desktop or a similar product), whose viral adoption reportedly demonstrated the value of local-environment integration. The core argument is that this represents AI transitioning from a query-answering layer to an operating system layer, one that performs work inside the machine rather than responding to prompts about it. The trade-off is explicit: greater capability comes with meaningful security and privacy exposure. Meta's track record on data privacy is cited as a specific concern that may limit enterprise and consumer uptake, particularly for users who would now be granting an AI agent broad access to their local environment.

Energy as a Structural Constraint

A startup called Niv AI has raised funding to address what the discussion characterizes as an underappreciated bottleneck: power consumption in AI data centers. GPU workloads spike unpredictably, forcing operators to either throttle usage or overpay for backup capacity. Niv AI is building real-time monitoring and optimization systems, described as a "co-pilot for data center energy," to address this inefficiency. The broader framing is that AI is not purely a software problem; it is an energy problem. The discussion connects this to current geopolitical conditions, noting that energy price shocks linked to instability in Iran are creating material risk for AI companies whose infrastructure depends on large, continuous power draw. Some AI training facilities are reportedly being designed with attached power generation capacity given the scale of consumption. Companies that can extract more compute output per unit of energy will hold a structural cost advantage as these pressures intensify.

Visual Memory as a Foundation for Physical AI

Memories AI is building what the discussion describes as a visual memory layer for AI systems operating in the physical world — robots, wearables, and real-world autonomous systems that need to recall what they have seen over time, not just what they have been told in text. This is distinct from the conversational memory that products like ChatGPT already offer. The use case centers on robotics: as humanoid robots (examples cited include Optimus and Figure, projected at $20,000–$30,000 consumer price points) enter warehouses and eventually homes, they will require persistent, transferable memory of physical environments and tasks. The discussion raises the possibility of memory portability across robot generations — allowing a household robot's learned context to transfer to a successor model — and frames this as a potential switching-cost moat for robot manufacturers. The technology is described as early-stage but directionally significant.

Enterprise Spend and the Anthropic-OpenAI Reversal

According to data from financial platform Ramp, Anthropic now captures over 70% of new enterprise AI spend — a significant shift from what was recently described as a tight race with OpenAI. The primary driver appears to be enterprise adoption of AI coding tools, alongside general Claude subscriptions. OpenAI is reportedly pivoting its strategy to place greater emphasis on enterprise after heavy investment in consumer products. The revenue picture complicates the narrative: OpenAI is reportedly on pace for approximately $25 billion in annual revenue, while Anthropic is tracking toward roughly $19 billion — closer than the enterprise spend gap might suggest. OpenAI's consumer base is described as approaching 900 million weekly active users, a scale Anthropic does not match. The discussion frames enterprise spend as the more meaningful competitive scoreboard for long-term platform positioning.

OpenAI's Government Distribution Play

OpenAI has signed a distribution agreement with AWS to deliver AI products into U.S. government environments, including GovCloud and classified infrastructure. The strategic significance goes beyond a standard partnership: AWS already holds deep, trusted relationships across federal agencies and is the incumbent compliant infrastructure layer for government cloud workloads. The discussion notes a complicating dynamic: AWS has invested billions in Anthropic, and Claude is deeply embedded in AWS's AI platform. OpenAI is effectively entering Anthropic's established distribution territory. The argument is that government adoption functions as a credibility signal for enterprise buyers: clearance for classified and sensitive workloads translates into trust for security-conscious enterprise procurement. Critically, OpenAI retains model deployment control and direct customer coordination within the AWS arrangement, meaning it is using AWS as a distribution channel without ceding product governance.

Key takeaways

  • The competitive frontier in AI is shifting from model quality to distribution control, with government procurement acting as a trust signal that cascades into enterprise sales cycles.
  • Anthropic's capture of 70%+ of new enterprise AI spend (per Ramp data) represents a meaningful market share reversal, even as OpenAI maintains a larger consumer base and higher absolute revenue trajectory.
  • Desktop AI agents represent a qualitative capability expansion, from cloud-based query tools to local operating system participants, with corresponding security and privacy risks that will influence enterprise adoption rates.
  • Energy infrastructure is emerging as a hard constraint on AI scaling; startups and operators that optimize power efficiency at the data center level will hold durable cost advantages, particularly under sustained energy price pressure.
  • Visual memory for physical AI systems is an early but strategically important layer: as robotics enters commercial and consumer environments, persistent and transferable environmental memory may become a significant switching-cost mechanism between hardware platforms.

Show notes

In this episode, we discuss Meta's new Manus desktop AI agent and its implications for AI as an operating system layer. We also cover startup Niv AI addressing AI data center energy consumption, Memories AI's visual memory layer for robotics, and Anthropic's dominance in new enterprise AI spending over OpenAI, which is now strengthening its government footprint with an AWS deal.

Chapters

00:00 Introduction and AIbox Update
01:26 Meta's Manus Desktop AI Agent
02:55 Niv AI Tackles Data Center Energy
04

Themes

  • openai
  • anthropic
  • agents