# Who Gets the Most Powerful AI? Four Labs Just Gave Four Answers

> In one week, Anthropic restricted its Mythos model to a security consortium, Meta launched its first proprietary model, Google released Gemma 4 under Apache 2.0, and OpenAI introduced identity-verified tiered access for GPT-5.4-Cyber. The four decisions represent four incompatible strategies for distributing frontier AI, and the infrastructure that abstracts away provider differences becomes the critical enterprise investment.

Content type: article
Source URL: https://www.agentpmt.com/articles/who-gets-the-most-powerful-ai-four-labs-just-gave-four-answers
Markdown URL: https://www.agentpmt.com/articles/who-gets-the-most-powerful-ai-four-labs-just-gave-four-answers?format=agent-md
Updated: 2026-04-15T06:58:55.953Z
Author: Stephanie Goodman
Tags: MCP, OpenAI, AI Powered Infrastructure, Enterprise AI Implementation, Security In AI Systems, News

---

Between April 2 and April 14, Anthropic, Meta, Google, and OpenAI each answered the same question — who should have access to their most capable AI — and none of them agreed. Anthropic locked its Mythos model inside a consortium of roughly 40 organizations after the model found thousands of high-severity vulnerabilities across every major operating system and browser. Meta shipped Muse Spark, its first proprietary model, abandoning the open-weight Llama strategy. Google released Gemma 4 under Apache 2.0, the most permissive license it has ever used. And OpenAI launched GPT-5.4-Cyber with identity-verified tiered access, letting vetted security professionals unlock capabilities that remain blocked for everyone else.

Each decision reflects a different commercial bet on how AI capability should flow from lab to user — and each one reshapes what enterprises can actually build.

## What Anthropic Decided — and Why Wall Street Noticed

Anthropic's move was the most dramatic. Claude Mythos Preview found vulnerabilities that had survived decades of human review and millions of automated security tests, including a long-standing flaw in OpenBSD, a system built specifically for security. The model's exploit development proved sophisticated enough that Anthropic concluded broad release was too dangerous. The vast majority of vulnerabilities Mythos discovered remain unpatched — defenders simply cannot fix them as fast as the model finds them. The scale of what Mythos exposed underscores what many in the industry already recognize as [an agentic AI security crisis](https://www.agentpmt.com/articles/the-agentic-ai-security-crisis-is-here-most-organizations-aren-t-ready) — one where the tools themselves can become vectors if access is not carefully controlled.

The response was Project Glasswing — a security consortium including Amazon Web Services, Apple, Google, Microsoft, JPMorgan Chase, CrowdStrike, Palo Alto Networks, and NVIDIA, among others. Anthropic committed up to $100 million in usage credits and donated separately to open-source security organizations through the Linux Foundation and Apache Software Foundation. The consortium's mandate is specific: local vulnerability detection, black box testing of compiled binaries, endpoint security, and penetration testing. Anthropic plans to publish a public report on vulnerabilities fixed within 90 days.

The financial world took this seriously. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened an emergency meeting at Treasury headquarters, bringing in the CEOs of Citigroup, Morgan Stanley, Bank of America, Wells Fargo, and Goldman Sachs. Jamie Dimon, unable to attend, had already stated that cybersecurity "remains one of our biggest risks" and that "AI will almost surely make this risk worse." When the Treasury Secretary and the Fed Chair pull bank CEOs into an unscheduled room over a single AI model, the capability question has moved past the benchmark charts. The same dynamics that make [MCP tools a potential threat vector](https://www.agentpmt.com/articles/when-your-mcp-tools-become-the-threat-vector) apply at scale when frontier models can autonomously discover and exploit software flaws.

Anthropic does not plan to make Mythos Preview generally available. The restricted access is not a launch phase — it is the distribution model.

## Meta Breaks Its Own Playbook

After years of positioning Llama as the industry's default open-weight model family — built into one of the largest AI distribution networks in the industry — Meta released Muse Spark as entirely proprietary. No open weights. Access limited to a private API preview for select partners.

The shift followed a massive investment and a nine-month rebuild of Meta's AI infrastructure under Alexandr Wang at the newly formed Meta Superintelligence Labs. Wang described it as "step one," with larger models in development and plans to open-source future versions. But the immediate signal was clear: Meta concluded that competing at the frontier required controlling distribution.

Muse Spark is a natively multimodal reasoning model with three interaction modes — Instant, Thinking, and Contemplating — plus tool use, visual chain-of-thought processing, and multi-agent orchestration. Its benchmark performance landed in the middle of the pack, behind Claude Opus 4.6, Gemini 3.1 Pro, and GPT-5.4 on the AI Intelligence Index, though it led all competitors on health-related reasoning tasks. The model is rolling out across Facebook, Instagram, WhatsApp, Messenger, and Ray-Ban AI glasses — reaching Meta's global user base through surfaces Meta already controls. The strategy is not to win on benchmarks. It is to own the distribution.

The timing matters. Alibaba made the same move the same week, shipping three proprietary models in three days and stepping back from its own open-source Qwen strategy. When two of the three largest open-source AI contributors go proprietary in the same week, the open-weight model landscape is shifting beneath developers who built on it.

## Google Goes Maximally Open

Google took the opposite approach with Gemma 4, releasing it under Apache 2.0 — the most permissive license in the Gemma line's history. Earlier Gemma versions had carried custom licenses with restrictions that discouraged enterprise adoption. Apache 2.0 removes those barriers entirely.

The performance gains were dramatic. Gemma 4 jumped from 20.8% to 89.2% on the AIME math benchmark compared to its predecessor — the kind of generational leap that makes the licensing decision commercially significant, not just philosophically interesting.

Google's calculation is strategic: open-source models that carry Google's DNA drive adoption on Google Cloud. If enterprises self-host Gemma, they are more likely to scale on Google's infrastructure. The model is free. The compute to run it is not. And by going maximally permissive while Meta and Alibaba pull their models behind proprietary walls, Google positions Gemma as the safe bet for organizations that need open weights they can count on staying open.

## OpenAI Verifies the User, Not the Model

OpenAI's answer arrived on April 14 with GPT-5.4-Cyber and an expanded Trusted Access for Cyber program. The model is a variant of GPT-5.4 fine-tuned for defensive cybersecurity, with fewer restrictions on vulnerability research and a new binary reverse engineering capability — the ability to analyze compiled software without source code access. It is explicitly "cyber-permissive" for users who pass identity verification.

The tiered system works like this: users verify credentials, and higher verification levels unlock progressively more powerful capabilities. Individual security professionals verify at a dedicated portal. Enterprises request access through their OpenAI representative. The highest tier grants access to GPT-5.4-Cyber itself. OpenAI's earlier models had sometimes refused to answer dual-use cyber queries from legitimate security researchers — a friction point that GPT-5.4-Cyber is designed to eliminate for verified users.
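OpenAI has not published the mechanics of its verification tiers, but the gating pattern described above can be sketched generically. Everything in this snippet — the tier names, the capability map, the function names — is hypothetical illustration of tier-gated access, not OpenAI's actual scheme or API.

```python
# Hypothetical sketch of tier-gated capability access.
# Tier names and capability sets are illustrative only.
from enum import IntEnum

class Tier(IntEnum):
    UNVERIFIED = 0            # default public access
    VERIFIED_INDIVIDUAL = 1   # identity-verified security professional
    VERIFIED_ENTERPRISE = 2   # enterprise vetted via account representative

# Minimum tier required to unlock each capability (illustrative).
CAPABILITY_MIN_TIER = {
    "general_chat": Tier.UNVERIFIED,
    "vulnerability_research": Tier.VERIFIED_INDIVIDUAL,
    "binary_reverse_engineering": Tier.VERIFIED_ENTERPRISE,
}

def allowed(user_tier: Tier, capability: str) -> bool:
    """A request is permitted only if the user's tier meets the minimum."""
    return user_tier >= CAPABILITY_MIN_TIER[capability]

# An unverified user cannot run dual-use cyber queries...
assert not allowed(Tier.UNVERIFIED, "vulnerability_research")
# ...but a verified researcher can.
assert allowed(Tier.VERIFIED_INDIVIDUAL, "vulnerability_research")
```

The design choice worth noting is that the restriction attaches to the caller, not the model: the same model serves every tier, and verification simply widens what it will answer.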

Fouad Matin, a cyber researcher at OpenAI, framed it directly: "This is a team sport. We need to make sure that every single team is empowered to secure their systems. No one should be picking winners and losers in cybersecurity."

Where Anthropic selected roughly 40 organizations for Mythos access, OpenAI is building a verification system designed to scale to thousands of individual defenders and hundreds of teams. The philosophy differs at the root: Anthropic restricts the model; OpenAI restricts the user.

## What Enterprises Actually Face Now

The practical consequence for any organization running agentic AI is that model access has become a strategy question, not just a capability question. Choosing Anthropic's restricted consortium means applying for access and operating within Glasswing's terms. Choosing Meta's proprietary model means building within Meta's ecosystem. Choosing Google's open-source path means accepting self-hosting responsibility. Choosing OpenAI's tiered system means maintaining identity verification overhead. For a concise breakdown of the week's decisions, see [four AI labs split on model access this week](https://www.agentpmt.com/articles/four-ai-labs-split-on-model-access-this-week).

These are not interchangeable options. Each access model creates a different developer experience, a different procurement process, and a different set of assumptions about what your AI infrastructure can do six months from now. The organizations navigating this fragmentation also face a parallel challenge on [the payment rails that connect agents to services](https://www.agentpmt.com/agent-payments) — where interoperability is equally unsettled.

The Model Context Protocol, originally developed by Anthropic and now governed by the Linux Foundation's Agentic AI Foundation, provides a tool interoperability layer across models. But MCP solves tool access, not model access. An enterprise using MCP-compatible infrastructure still manages separate relationships with each provider. Research has documented that [static MCP servers can waste the vast majority of an agent's context window on unused tool definitions](https://www.agentpmt.com/articles/mcp-servers-waste-96-of-agent-context-on-tool-definitions) — a problem that compounds when enterprises run agents across multiple providers simultaneously. The agent infrastructure tier — the platforms that connect models to tools and manage business automation AI workflows across providers — becomes the critical investment when the model layer itself is fragmenting.
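The abstraction this paragraph points to — one tool layer shared across fragmenting model providers — can be sketched as a thin dispatch interface. The class and function names below are hypothetical; they illustrate the pattern (fetch tools on demand, treat the model backend as swappable), not AgentPMT's or MCP's actual API.

```python
# Hypothetical sketch: a provider-agnostic agent layer that fetches tool
# definitions on demand rather than preloading every tool into context.
from typing import Callable, Protocol

class ModelProvider(Protocol):
    """Minimal interface any backend must satisfy — restricted,
    proprietary, open-weight, or tiered-access alike."""
    def complete(self, prompt: str) -> str: ...

class ToolRegistry:
    """Tools are registered once and fetched only when an agent needs
    them, keeping unused definitions out of the context window."""
    def __init__(self) -> None:
        self._tools: dict[str, Callable[[str], str]] = {}

    def register(self, name: str, fn: Callable[[str], str]) -> None:
        self._tools[name] = fn

    def fetch(self, name: str) -> Callable[[str], str]:
        return self._tools[name]

def run_agent(provider: ModelProvider, registry: ToolRegistry,
              tool_name: str, task: str) -> str:
    # The same workflow runs unchanged whichever provider is plugged in.
    plan = provider.complete(task)
    tool = registry.fetch(tool_name)
    return tool(plan)
```

Swapping providers then means changing one constructor argument, which is the property that lets workflows survive a shift between restricted, proprietary, open-source, or tiered-access models.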

AgentPMT's cross-platform approach addresses this directly. Its [Dynamic MCP](https://www.agentpmt.com/dynamic-mcp) implementation fetches tools on demand regardless of which model powers the agent, so workflows built today keep working when enterprises shift between restricted, proprietary, open-source, or tiered-access models. When the model layer fractures, the infrastructure that abstracts away provider differences is the part that holds.

## The Access Question Is Not Converging

The capital behind these decisions suggests they are structural, not temporary. OpenAI, Meta, and Anthropic have each made nine- and ten-figure commitments to their respective strategies this year alone. These are infrastructure investments that lock each company into its access philosophy for years. The pace of competing infrastructure commitments — visible in how quickly [new payment rails and governance frameworks](https://www.agentpmt.com/articles/seven-payment-rails-in-fourteen-days-who-controls-the-agent-economy) emerged this quarter — confirms that the industry is building permanent divergence, not temporary experiments.

As of this week, every major AI lab has a different position on who should get access to frontier capability — and those positions follow from business models and competitive pressures that are not converging. Every organization building specialized AI tools or deploying AI cybersecurity systems needs infrastructure that does not bet on any single answer winning.

* * *

## Sources

-   Anthropic's Mythos reveals a growing security gap — Fortune
-   OpenAI rolls out tiered access to advanced AI cyber models — Axios
-   Trusted access for the next era of cyber defense — OpenAI Blog
-   OpenAI Releases Cyber Model to Limited Group in Race With Mythos — Bloomberg
-   Bessent and Powell convened Wall Street CEOs to address Anthropic's Mythos model — Fortune
-   Anthropic's Mythos is a wake-up call — Fortune
-   Anthropic's new Mythos AI tool signals a new era for cyber risks — CS Monitor
-   Meta Platforms Finally Releases Muse Spark — 24/7 Wall St
-   Anthropic Unveils Project Glasswing — HPCWire
-   OpenAI launches GPT-5.4-Cyber model — SiliconAngle
-   Did Meta Sacrifice Its Open-Source Identity? — AI News