Anthropic and SpaceX just shook hands. Here's why that matters.
Rival companies. Competing visions. One very large supercomputer.
The headline: A deal nobody saw coming
Elon Musk spent the first few months of this year calling Anthropic misanthropic. In February he wrote on X that the company "hates Western civilisation." Then last week, he sat down with senior members of the Anthropic team, came away impressed, and signed a compute deal that hands Claude access to one of the most powerful AI supercomputers on the planet.
That's quite a turnaround.
On 6 May, Anthropic announced an agreement to use the full compute capacity of SpaceX's Colossus 1 data centre in Memphis, Tennessee. Over 220,000 NVIDIA GPUs. More than 300 megawatts of new capacity. Online within the month.
The most immediate effect: usage limits for Claude Pro and Max subscribers are going up. The five-hour rate caps that have been a persistent frustration for heavy users are being lifted. That's not a small thing.
The context: Why compute matters this much
If you're not deep in the AI infrastructure world, the numbers in these announcements can start to blur together. Gigawatts, GPU counts, megawatts of capacity - it all sounds like someone reading out a spec sheet.
But here's what it means in practice.
Anthropic has been struggling to keep up with demand. Claude's usage has been growing faster than the company could provision compute to serve it. Rate limits weren't there because Anthropic wanted to restrict access - they were there because there physically wasn't enough capacity to go around. Developers building on the API hit ceilings. Pro subscribers hit them too.
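For developers hitting those API ceilings, the standard workaround is retrying with exponential backoff. The sketch below is a generic pattern, not Anthropic-specific; `RateLimitError` and `call_with_backoff` are hypothetical names standing in for whatever a given client library raises on an HTTP 429.

```python
import random
import time


class RateLimitError(Exception):
    """Stand-in for an HTTP 429 response from any rate-limited API."""


def call_with_backoff(request_fn, max_retries=5, base_delay=1.0):
    """Retry a rate-limited call with exponential backoff plus jitter.

    `request_fn` is any zero-argument callable that raises
    RateLimitError when it hits the cap.
    """
    for attempt in range(max_retries):
        try:
            return request_fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            # double the wait each attempt, with jitter to avoid
            # many clients retrying in lockstep
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)
```

The jitter term matters when many clients share one quota: without it, they all retry at the same instant and collide again.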
This deal - along with the earlier agreements with Amazon, Google, Microsoft and NVIDIA - is Anthropic closing that gap. The SpaceX partnership alone gives them enough capacity to power over 300,000 homes. That's not incremental. That's a step change.
The politics: Strange bedfellows
The politics of this are worth acknowledging, because they're genuinely odd.
Musk merged SpaceX with xAI earlier this year. xAI runs Grok, a direct competitor to Claude. SpaceX's Colossus 2 is now where xAI does its own training - which is why Musk said he was comfortable renting out Colossus 1. He'd already moved on. But he also added, in a post on X, that SpaceX reserves the right to reclaim the compute if Anthropic's AI "engages in actions that harm humanity."
Make of that what you will.
What's notable is that Musk, after spending months publicly hostile to Anthropic, apparently spent a week with their team and changed his assessment. "Everyone I met was highly competent and cared a great deal about doing the right thing. No one set off my evil detector," he wrote. It's a strange compliment. But it's also a public signal that the deal is genuine, not a grudging transaction.
For its part, Anthropic kept the announcement straightforward and commercial: more compute, better limits for subscribers, capacity coming online fast.
The bigger picture: What this signals for Claude users
Announcements like this tend to get covered as industry news - who's partnering with whom, what the dollar figures are. That's fair. But for people who use Claude day-to-day, the practical implications are more interesting.
Anthropic has signed five major compute agreements in a short window: Amazon (up to 5 gigawatts), Google and Broadcom (5 gigawatts), Microsoft and NVIDIA ($30 billion of Azure capacity), a $50 billion investment through Fluidstack, and now SpaceX. That's not a company hedging its bets on infrastructure. That's a company preparing for an order-of-magnitude increase in usage.
The "dreaming" feature announced at the same developer conference is worth noting too. The tool lets Claude review work between sessions, spot patterns, and update stored context. It's early - research preview only - but it points to where the product is heading: an AI assistant with meaningful continuity, not one that forgets everything the moment you close the tab.
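The shape of that idea, an assistant that accumulates context across sessions rather than starting cold, can be sketched in a few lines. This is a toy illustration only; the function name, file format, and compaction rule are all invented here and say nothing about how Anthropic's feature actually works.

```python
import json
from pathlib import Path


def update_context(store_path, session_notes, max_entries=50):
    """Toy 'memory between sessions': load stored context, append this
    session's observations, and keep only the most recent entries.

    Purely illustrative -- a real system would summarise and rank,
    not just truncate.
    """
    path = Path(store_path)
    context = json.loads(path.read_text()) if path.exists() else []
    context.extend(session_notes)
    context = context[-max_entries:]  # crude compaction: keep newest
    path.write_text(json.dumps(context, indent=2))
    return context
```

Even this crude version captures the product shift: what the assistant knows at the start of a session is a function of every session before it, not just the current prompt.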
Taken together, these moves suggest Anthropic expects demand to keep accelerating, and is building the infrastructure to meet it.
The bottom line: Good news, with caveats
For Claude users, this is straightforwardly positive. More capacity means fewer rate limits, more consistent performance during peak hours, and more room for the kind of heavy usage that's genuinely useful - long coding sessions, complex document work, multi-step agent tasks.
For the AI industry, it's a reminder that compute is still the binding constraint. Models are advancing fast, but running them at scale requires infrastructure investment that very few organisations can fund. Anthropic's string of partnerships - including one with a company whose CEO publicly called them misanthropic three months ago - shows how much the competitive landscape is shifting.
Whether Musk's "evil detector" proves a reliable gauge of Anthropic's character is a separate question.
The compute, at least, is real.
Steve Lavine is a full-stack developer and founder of Lavine Web & AI Solutions, working with SMEs and startups across the UK. lavine.dev