Project Glasswing: Why Mythos Is Only Available to 40 Organizations

Introduction

Claude Mythos finds zero-days everywhere, but you can't use it. Anthropic gave it to 40 organizations through Project Glasswing. Here's how it works and what it means.

On April 7, 2026, Anthropic announced the most capable AI model ever built — and decided not to sell it to anyone.

Claude Mythos Preview, code name "Capybara," is the most advanced AI model for cybersecurity ever created: it finds zero-day vulnerabilities in every major operating system, turns discovered flaws into working exploits, and uncovered a bug that had been hiding in OpenBSD for 27 years. Instead of opening access like it usually does with new models, Anthropic created a closed consortium — Project Glasswing — and distributed it to roughly 40 selected organizations.

The question I immediately asked myself: is this really a safety decision, or a market play dressed up as responsibility?

After three weeks of announcements, controversy, and an embarrassing breach, I have enough to attempt an answer.

What Claude Mythos Can Do (and Why It's Scary)

Mythos is not a cybersecurity-specific model. It's a general-purpose model that turned out to be devastating at security tasks. The distinction matters: it wasn't specifically trained to find vulnerabilities. It finds them because it's capable enough to do so.

The numbers Anthropic published are impressive:

| Metric | Result |
|---|---|
| Expert-level CTF success rate | 73% |
| Zero-day vulnerabilities found in Firefox | 271 |
| Working exploits generated from those vulnerabilities | 181 |
| Oldest bug discovered | 27 years (OpenBSD) |
| SWE-bench Verified (coding) | 93.9% |

In one documented case, Mythos wrote a browser exploit chaining four separate vulnerabilities, including a JIT heap spray, to escape both the renderer sandbox and the OS sandbox. Anthropic engineers with no security background left Mythos running overnight with a request to look for RCE (Remote Code Execution); by morning they had a working exploit.

The point isn't that Mythos is good at finding bugs. It's that it does so autonomously, on complex codebases, without step-by-step human guidance.

This is where the gap with Opus 4.7 or GPT-5.5 becomes clear. With those models you need to guide the analysis: point to suspicious files, craft specific prompts, iterate on the response. With Mythos the workflow is different — you hand it a codebase, ask it to find vulnerabilities, and it returns working exploits. Without you needing to know where to look.

Project Glasswing: Who's In and How It Works

Anthropic didn't simply say "we're not releasing it." They built a controlled-access infrastructure around Mythos and called it Project Glasswing.

The 12 Founding Partners

Twelve organizations sit at the main table:

Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, Palo Alto Networks — and Anthropic itself.

They're not customers: they're operational partners. They use Mythos to hunt vulnerabilities in their own systems and in the open-source software the global infrastructure depends on.

The 40+ Organizations with Mythos Access

Beyond the founders, roughly 40 organizations that build or maintain critical software received monitored access. Anthropic proposed extending access to another 70 — bringing the total to about 120 — but the White House blocked the expansion on April 30, 2026.

The Economics

Anthropic put real money into this:

  • $100 million in usage credits for partners
  • $4 million in direct donations to open-source security organizations
  • Post-research pricing: $25/M input tokens, $125/M output tokens

For comparison, Claude Opus 4.7 costs $15/$75. Mythos costs two-thirds more, and you can't even buy it freely.
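To make the premium concrete, here's a quick cost sketch comparing the two price points on the same job. The token counts are hypothetical, chosen only for illustration:

```python
def job_cost_usd(input_tokens: int, output_tokens: int,
                 in_rate: float, out_rate: float) -> float:
    """Cost in USD, given per-million-token rates for input and output."""
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# Hypothetical audit job: 2M input tokens, 500k output tokens
mythos_cost = job_cost_usd(2_000_000, 500_000, 25, 125)  # Mythos: $25/$125
opus_cost = job_cost_usd(2_000_000, 500_000, 15, 75)     # Opus 4.7: $15/$75

print(mythos_cost)  # 112.5
print(opus_cost)    # 67.5
```

At both tiers the ratio is the same 5/3, so any Mythos workload runs about two-thirds more expensive than the equivalent Opus 4.7 job.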

Technical Access

Approved partners reach Mythos through standard cloud channels: Claude API, Amazon Bedrock (us-east-1 region), Google Vertex AI, Microsoft Foundry. But access is filtered at the IAM policy level, with a complete audit trail on CloudTrail for every single API call.
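As an illustration of what "filtered at the IAM policy level" could mean on the Bedrock side, here's a sketch of a policy that only allows invocation of the model, and only for tagged principals. The model ARN pattern and the tag key are my own hypotheticals — Anthropic hasn't published these details; only the IAM actions themselves are real Bedrock permissions:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowMythosInvokeForApprovedPrincipals",
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-mythos-*",
      "Condition": {
        "StringEquals": { "aws:PrincipalTag/glasswing-approved": "true" }
      }
    }
  ]
}
```

Paired with CloudTrail logging, a setup like this is what gives Anthropic and AWS a per-call audit trail on who invoked the model and when.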

Diagram of the access flow to Claude Mythos through Project Glasswing

It's not a model you download and run. It's a service with institutional guardrails.

The Claude Mythos Breach: Compromised on Announcement Day

The most embarrassing part of the story came immediately. On the very day of the announcement — April 7 — an unauthorized group gained access to Mythos.

The mechanism was almost trivial: an employee at a third-party Anthropic contractor used their system access to locate Mythos's protected environment. They leveraged information leaked from a previous data breach at Mercor (an AI training startup) to guess where the model was hosted. Then they shared access with a group on Discord.

As of April 23, according to Fortune, the group was still using the model.

A model declared "too dangerous for public release" was compromised by a contractor who guessed where it was.

This episode doesn't invalidate the decision to restrict access — if anything, it reinforces it. But it shows that the AI safety problem isn't just "who can use the model": it's "who can access the infrastructure hosting it."

The White House Blocks Project Glasswing's Expansion

On April 30, 2026, the Trump administration told Anthropic it opposed expanding Mythos access from 40 to 120 organizations. Three stated reasons:

National security. The day-one breach fueled concerns that broader access means more attack surface.

Computational resources. The administration worries that sharing Mythos with 70 additional companies could reduce the computational capacity available for government use.

Pre-existing tensions. In February 2026, Trump had already ordered the government to stop using Anthropic's technology, after the Pentagon designated Anthropic as a "national security supply chain risk."

The picture is more political than technical. Anthropic is in an uncomfortable position: it built the most powerful cybersecurity model, but the government that would benefit most doesn't trust the company that built it.

Timeline of key Project Glasswing events: announcement, breach, White House block

For those of us working in Europe, there's a different practical takeaway: no European company is among Glasswing's founding partners. India is negotiating access. Europe, for now, watches from the sidelines.

Claude Mythos vs GPT-5.5 vs Gemini: The Comparison That Matters

Mythos doesn't exist in a vacuum. It arrived two weeks before GPT-5.5's public release (April 23) and competes with Gemini 3.1 Pro. But the comparison is asymmetric: you can actually use the other two. Mythos, you can't.

| Aspect | Claude Mythos | GPT-5.5 | Gemini 3.1 Pro |
|---|---|---|---|
| Availability | Glasswing only | All paid tiers | Available via API |
| Coding (SWE-bench Verified) | 93.9% | n/a | n/a |
| Cyber capabilities | In a class of its own | Lower | Lower |
| Agentic (GDPval) | 84.9% | n/a | n/a |
| Reasoning (ARC-AGI-2) | 77.1% | n/a | n/a |
| Price (per M tokens) | $25/$125 | Varies by tier | Varies by tier |

The interesting point isn't who "wins" on benchmarks. It's that Anthropic chose not to compete in the consumer market with its best model. GPT-5.5 and Gemini 3.1 Pro aim for volume and adoption. Mythos aims for strategic impact.

It's the first time a frontier AI company has explicitly said: "This model is too powerful to sell as a service."

The Glasswing Precedent: Tiered Access to AI Models

Glasswing sets a new precedent in the AI market, and it's not necessarily a positive one.

Until now, the AI safety debate has focused on alignment — making sure models don't do bad things. Glasswing introduces a different concept: tiered access. The model isn't made safe through alignment; it's made safe by restricting who can use it.

This raises concrete questions:

Who decides which organizations deserve access? Today, Anthropic decides. Tomorrow, perhaps an "independent third-party body" — which Anthropic itself has said it wants to create to govern Glasswing.

What happens when Mythos-class capabilities arrive in open-weight models? Anthropic has stated that Mythos capabilities will be integrated into a future Opus model once safeguards are validated. But if an open-source model reaches the same cyber capabilities without safeguards, Glasswing's containment becomes irrelevant.

Tiered access only works as long as no one else builds an equivalent model without restrictions.

What Project Glasswing Means for Developers

If you work with Laravel, PHP, or any web stack, you won't use Mythos directly — at least not anytime soon. But the effects on the security of the code you write every day already concern you.

The Patches Are Already Arriving

The vulnerabilities Glasswing finds get reported to maintainers. Firefox 150 included fixes for 271 vulnerabilities discovered by Mythos. If you use Firefox, Chrome, Linux, or any software maintained by a Glasswing partner, you're already benefiting indirectly from the project.

The Threat Model After Mythos Has Changed

If an AI model can find and exploit vulnerabilities autonomously, your code's security is no longer competing against human attackers alone. Regular security audits stop being optional: tools like Laravel Boost v2.4 (which includes a security audit skill for Claude Code) and the PHP Security Auditor skill let you run automated analysis for SQL injection, XSS, IDOR, and mass assignment.

```shell
# Example: security audit with Claude Code on a Laravel project
claude-code --skill php-security-auditor \
  --path ./app \
  --focus "sql-injection,xss,idor,mass-assignment"
```

It's not Mythos, but Opus 4.7 with the right skills already catches a lot.
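To make one of those vulnerability classes concrete, here's a minimal, self-contained illustration of why string interpolation in queries is exactly what these audit skills flag. It uses Python with sqlite3 standing in for a PHP/Laravel stack; the principle (bound parameters instead of interpolation) is the same:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

# A classic injection payload submitted as "user input"
user_input = "' OR '1'='1"

# Vulnerable: interpolating the input lets the attacker rewrite the WHERE clause
vulnerable = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'"
).fetchall()

# Safe: a bound parameter is treated as a literal value, never as SQL
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # every row leaks: [('alice',), ('bob',)]
print(safe)        # no user is literally named "' OR '1'='1": []
```

The interpolated query collapses to `WHERE name = '' OR '1'='1'`, which matches every row; the parameterized query matches none. In Laravel terms, this is the difference between `DB::raw()` with concatenated input and Eloquent's bound parameters.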

Governance Will Become a Theme

If Glasswing becomes the standard model for distributing high-risk AI models, expect it to reach Europe — likely with the AI Act as the regulatory framework. For Italian companies working with critical infrastructure, preparing for compliance on this front is not premature.

Is Claude Mythos Hype or a Turning Point? The Open Debate

Not everyone is convinced Mythos is the revolution Anthropic claims. Bruce Schneier, cryptographer and security analyst, called the hype around Mythos "mostly marketing," arguing that smaller models can do similar things.

Simon Willison, on the other hand, wrote on announcement day that "restricting access seems necessary to me."

The truth is probably somewhere in between. The numbers Anthropic published — 271 zero-days in Firefox, 73% of CTFs solved — are only partially verifiable, because the model isn't accessible to independent researchers. Anthropic said it would publish results within 90 days, but for now we're taking their word for it.

What isn't hype is the regulatory and commercial precedent. Glasswing changes the rules: the discussion is no longer just about "what an AI model can do," but "who has the right to use it." And that's a question that concerns everyone — not just the 40 partners in the consortium.

The next milestone is Google I/O on May 19. If Google showcases its own AI cybersecurity capabilities with Gemini, will Anthropic's "restricted access" model withstand competitive pressure? Or will Project Glasswing become a luxury only Anthropic can afford — while the rest of the market finds its way with less powerful but universally accessible models?

