A CMS misconfiguration exposed nearly 3,000 internal Anthropic documents on Thursday, revealing a new model called Claude Mythos, a new tier called Capybara, an invite-only CEO summit in Europe, and a draft blog post warning the model poses "unprecedented cybersecurity risks." Anthropic confirmed the model exists and called it "a step change" in capabilities. The same day, a federal judge blocked the Pentagon's supply chain risk designation against the company, and Bloomberg reported Anthropic is eyeing an October IPO at over $60 billion.
That is a lot of narrative to absorb in 24 hours.
The Deep Dive
Here's what actually happened: a default setting in Anthropic's content management system left uploaded assets publicly accessible and searchable. A cybersecurity researcher at Cambridge and a senior AI security researcher at LayerX independently found the exposed data. Fortune broke the story Thursday evening. Among the files were draft blog posts describing Claude Mythos as part of a new model tier called Capybara, positioned above Opus as larger, more capable, and more expensive. The draft warned that Mythos is "currently far ahead of any other AI model in cyber capabilities" and said the company wanted to act with "extra caution" before release.
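The failure mode here is mundane, which is exactly why it's worth internalizing: nobody flipped a switch to expose anything, the defaults did it. Here's a minimal sketch of the kind of preflight audit that catches risky defaults in an asset-store config. Every field name below is invented for illustration; nothing here describes Anthropic's actual CMS.

```python
# Hypothetical sketch: auditing an asset store's visibility settings.
# The field names ("default_visibility", "listing_enabled", etc.) are
# made up for illustration, not taken from any real CMS.

def audit_asset_config(config: dict) -> list[str]:
    """Return a list of findings for risky defaults in an asset-store config."""
    findings = []
    # The dangerous pattern: when a setting is absent, the permissive
    # value wins. That is what "a default setting left assets exposed" means.
    if config.get("default_visibility", "public") == "public":
        findings.append("uploads are public unless each one is explicitly restricted")
    if config.get("listing_enabled", True):
        findings.append("the asset index is enumerable, so unguessable URLs don't help")
    if not config.get("auth_required_for_download", True):
        findings.append("downloads skip authentication entirely")
    return findings

# An empty config mirrors the incident: nothing was set, so the
# permissive defaults applied.
print(audit_asset_config({}))
```

The point of the sketch is the `get(..., permissive_default)` pattern: a secure-by-default system would fail closed when a setting is missing, and an audit like this is cheap enough to run in CI.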
Anthropic's response was measured. A spokesperson confirmed the company is "developing a general purpose model with meaningful advances in reasoning, coding, and cybersecurity" and is testing it with "a small group of early access customers." They didn't deny the leaked details. They didn't panic. They reframed.
I want to be careful here because I genuinely don't know whether this was a real security lapse or something more choreographed. CMS misconfigurations happen constantly, and the irony of a company whose next model specializes in cybersecurity getting caught by a misconfigured CMS is almost too perfect. But there's a version of this that's exactly what it looks like: an engineer set a default wrong, and a researcher found it.
What I do know is the timing. This leak landed on the same day Anthropic won a major legal victory against the Pentagon, with Judge Rita Lin issuing a 43-page ruling that called the government's supply chain risk designation "classic illegal First Amendment retaliation." It landed on the same day Bloomberg reported Anthropic is considering going public as early as October. And it landed during a week when the company's run-rate revenue reportedly topped $19 billion, more than double what it was three months ago.
If you're warming up investor conversations, winning headline court battles, and growing revenue at that rate, a "leaked" model announcement that generates two days of free press coverage without any formal capability commitments is about as good as it gets. You don't have to believe it was deliberate to notice that every piece of this week served Anthropic's positioning perfectly.
The broader point stands regardless of intent: AI companies are managing perception constantly, and they're exceptionally good at it. Funding rounds require momentum. IPOs require narrative. And sometimes the best narrative is the one that doesn't look like marketing at all. When something in this industry looks too clean to be an accident, you should at least raise an eyebrow.
Also Worth Knowing
A federal judge blocked the Pentagon's effort to designate Anthropic a supply chain risk, ruling that the government was retaliating against the company for refusing to allow its AI to be used in autonomous weapons or mass surveillance. Judge Lin's ruling was blunt, writing that "nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government." The Pentagon's CTO called the ruling "a disgrace" and said the designation still stands under a separate statute now being litigated in D.C. This is far from over, but the first round went decisively to Anthropic, and it matters because the supply chain risk label was forcing defense contractors to cut ties with the company entirely.
Block, the fintech company behind Square and Cash App, cut roughly 4,000 employees in early March, reducing its workforce by about 40%. CEO Jack Dorsey explicitly attributed the cuts to AI, writing that the reductions were "not driven by financial difficulty, but by the growing capability of AI tools to perform a wider range of tasks." Oracle is planning its own cuts of 20,000 to 30,000 roles to free up cash for AI data center expansion. I find the honesty in Dorsey's framing notable regardless of whether you believe it. Most companies use AI as a vague justification for cost-cutting driven by other factors. Block is saying the quiet part out loud, and it's worth paying attention to which companies follow.
OpenAI launched GPT-5.4 in early March, positioning it explicitly for professional work. OpenAI's internal benchmarks show an 83% success rate on real-world job tasks versus 70.9% for GPT-5.2, with particular strength in long documents, spreadsheets, and legal analysis. It also includes native computer-use capabilities, meaning the model can navigate software interfaces by interpreting screenshots and issuing commands. If you haven't tested it against your actual workflows, you should. The gap between 5.2 and 5.4 is meaningful for production use.
The Builder's Take
This was an Anthropic week, and the signal is loud if you know how to read it.
Three things happened simultaneously: a leaked next-generation model, a major court victory, and credible IPO reporting. Whether the leak was an accident or not, the combined effect is a company that looks like it's winning on capability, winning on principle, and preparing to go public from a position of strength. That is a textbook pre-IPO narrative arc.
Here's what I'd take from this if you're building: maintain provider diversification and test aggressively. Mythos is coming, and if the leaked benchmarks hold, it will reset expectations for what a frontier model can do in cybersecurity and coding. GPT-5.4 already raised the bar this month. The competitive pressure between Anthropic and OpenAI is producing real capability gains on a quarterly cadence, and if you're locked into a single provider, you're leaving performance on the table.
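Provider diversification doesn't have to mean a heavy abstraction layer. A thin registry that fans the same prompt out to every provider is enough to compare models against your own workloads. The sketch below uses stub functions where real SDK clients would go; every name in it is hypothetical.

```python
# Hypothetical sketch of a thin provider-abstraction layer. The stub
# "providers" stand in for real SDK clients; swapping vendors becomes
# a registry change instead of a rewrite.

from typing import Callable

# provider name -> callable that takes a prompt and returns a completion
PROVIDERS: dict[str, Callable[[str], str]] = {}

def register(name: str):
    """Decorator that adds a provider callable to the registry."""
    def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
        PROVIDERS[name] = fn
        return fn
    return wrap

@register("vendor_a")
def vendor_a(prompt: str) -> str:
    return f"[A] {prompt}"  # stub; a real client call goes here

@register("vendor_b")
def vendor_b(prompt: str) -> str:
    return f"[B] {prompt}"  # stub; a real client call goes here

def run_everywhere(prompt: str) -> dict[str, str]:
    """Fan one prompt out to every registered provider, so you can score
    responses against your own workflow rather than the news cycle."""
    return {name: fn(prompt) for name, fn in PROVIDERS.items()}

print(run_everywhere("summarize this incident report"))
```

When Mythos or the next GPT ships, evaluating it against your workloads is then a one-decorator change, which is the whole point of not being locked in.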
More practically: don't get distracted by the narrative layer. I use Anthropic's tools every day, and I'll keep using them. I also use OpenAI's tools and open-weight models. The companies behind these tools are playing a capital markets game that operates on a completely different level from the question of which model is best for your actual work. Allocate trust based on what performs in your workflows, not on which company had the best news cycle.
And when the next "accidental" leak drops from any of these companies, ask yourself who benefits, what's the timing, and what else is happening that week. Not because you should be cynical about everything, but because informed buyers make better decisions than captive audiences.
Keep building,
— JW