
The Integration Layer Is the New Lock-In
Four companies shipped four agent connector systems in three weeks. None interoperate. The integration layer, not the model, is the new lock-in.
Between February 5 and February 25, OpenAI, Anthropic, New Relic, and Salesforce each shipped their own proprietary agent connector system. All four reference open standards. None of their connector layers talk to each other. The battle for enterprise AI has quietly shifted: the model is a commodity, the integration layer is the moat.
Gartner projects that 40 percent of enterprise applications will embed AI agents by the end of 2026, up from less than five percent today. That is not a gentle adoption curve. That is a land rush, and land rushes reward whoever stakes the claim first, not whoever has the best tools. The enterprises making integration choices in the next six months will live with those choices for years, potentially decades. We have seen this movie before. It was called cloud computing, and it took most organizations the better part of a decade to understand what they had signed up for.
The cloud lock-in parallel is instructive but undersells the risk. Cloud lock-in trapped infrastructure. Agent lock-in traps process, institutional knowledge, workflow logic, and — increasingly — financial controls. The blast radius is categorically larger. This is precisely why AgentPMT's Dynamic MCP exists: a single integration point that works across Claude, OpenAI, Google, Microsoft, Cursor, VS Code, Windsurf, Zed, and any MCP-compatible agent. Tools and skills are fetched remotely and on demand. Nothing enters context until needed. The server binary is five megabytes, costs nothing to run, and auto-detects platforms. It is the architectural opposite of what the four major players shipped this month.
Four Walled Gardens in 20 Days
On February 5, OpenAI launched Frontier, an enterprise platform for building and managing AI agents. The headline feature was Business Context connectors — pipelines that ferry data from warehouses, CRMs, and internal applications into agents at runtime. HP, Intuit, Oracle, State Farm, and Uber were named as launch customers. OpenAI made a point of claiming compatibility with agents from Google, Microsoft, and Anthropic. The compatibility claim is technically accurate and strategically misleading. The connectors themselves are Frontier-proprietary. An enterprise that builds its data pipelines on Business Context connectors can theoretically swap the model underneath. But the plumbing — the expensive, time-consuming, compliance-sensitive plumbing — stays on OpenAI's platform. Compatibility with other models is a feature. Dependency on Frontier's connector layer is the product.
Nineteen days later, on February 24, Anthropic launched Claude Cowork. The approach was different in posture but identical in architecture. Cowork ships with ten pre-built departmental plugins spanning Finance, HR, Engineering, and Design. It includes Deep Connectors for Google Workspace and DocuSign, and a Finance plugin with real-time FactSet data integration. Under the hood, Cowork uses Model Context Protocol — MCP — the open standard Anthropic itself published. But the connector layer on top of MCP is proprietary. TechCrunch described Cowork as "a major opportunity to grow Anthropic's enterprise client base — and a significant threat to SaaS products." That framing is correct. The threat is not to SaaS products in general. The threat is to any SaaS product that does not control its own integration layer.
The same day Anthropic shipped Cowork, New Relic launched its Agentic Platform: a no-code agent builder with MCP support, an SRE Agent for incident management, visual drag-and-drop workflow tools, and OpenTelemetry integration for observability. Again, MCP is referenced. Again, the tooling and workflow orchestration layer sitting above the protocol is proprietary to New Relic's ecosystem.
Then, on February 25, Salesforce reported Q4 FY2026 earnings that made the financial case for agent lock-in more clearly than any product announcement could. But more on those numbers in a moment.
The pattern across all four launches is consistent. Each vendor references open standards — MCP in particular — as a foundation. Each then layers a proprietary abstraction on top of that foundation. The open standard provides credibility. The proprietary layer provides revenue. The combination provides lock-in that looks like openness.
AgentPMT's Dynamic MCP takes the opposite approach. Instead of layering proprietary connectors on top of MCP, it exposes the protocol directly. The tool catalog updates every thirty minutes without user intervention. One integration provides access to the largest marketplace of AI tools and skills available today — hundreds of tools, no proprietary connector layer, no context bloat from loading every tool at startup. Traditional MCP servers load their entire tool inventory into context at launch. Dynamic MCP fetches only what is needed, when it is needed. Running the server costs nothing. The architectural philosophy is straightforward: the integration layer should be infrastructure, not a product.
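The difference between eager and lazy tool loading is easier to see in code. The sketch below is illustrative only — the class, function names, and refresh mechanics are invented for this article and do not reflect AgentPMT's actual implementation. It shows the general idea: a lightweight catalog refreshed on an interval (here, thirty minutes), with individual tools fetched only on first use.

```python
import time

CATALOG_TTL = 30 * 60  # seconds between catalog refreshes

class DynamicToolRegistry:
    """Hypothetical sketch: resolve tools lazily instead of at startup."""

    def __init__(self, fetch_catalog, fetch_tool):
        self._fetch_catalog = fetch_catalog  # returns {name: description}
        self._fetch_tool = fetch_tool        # returns a callable for one tool
        self._catalog = {}
        self._loaded = {}                    # only tools actually used
        self._refreshed_at = None

    def catalog(self):
        # Refresh the lightweight name/description list, not the tools.
        now = time.monotonic()
        if self._refreshed_at is None or now - self._refreshed_at > CATALOG_TTL:
            self._catalog = self._fetch_catalog()
            self._refreshed_at = now
        return self._catalog

    def call(self, name, **kwargs):
        # A tool enters memory (and model context) only on first use.
        if name not in self.catalog():
            raise KeyError(f"unknown tool: {name}")
        if name not in self._loaded:
            self._loaded[name] = self._fetch_tool(name)
        return self._loaded[name](**kwargs)
```

An eager server would call the equivalent of `fetch_tool` for every catalog entry at launch; here, a registry with hundreds of catalog entries carries exactly as many loaded tools as the agent has actually invoked.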
The Seat Contraction Signal
Salesforce's Q4 numbers deserve close reading, not for what they celebrate but for what they reveal. Revenue hit $11.20 billion, representing 12 percent year-over-year growth — the fastest clip in two years. Adjusted earnings per share came in at $3.81 against expectations of $3.04, a significant beat. Agentforce, Salesforce's agentic AI platform, exceeded $800 million in annualized revenue during the quarter. By most traditional metrics, this was a strong quarter.
But the stock told a different story. Shares were already down 28 percent year-to-date entering the earnings call. FY2027 guidance of $45.8 billion to $46.2 billion implied 10 to 11 percent growth. Wall Street wanted more. The reason Wall Street wanted more is the reason this earnings report matters to anyone thinking about agent integration: industry analysts noted revenue attrition from seat count contraction. AI agents are displacing human-operated software seats. Salesforce is simultaneously the beneficiary of agentic AI adoption through Agentforce and the victim of it through reduced per-seat licensing in its core CRM products.
Marc Benioff allocated $50 billion for share buybacks, noting — with characteristic understatement — that "these are some low prices." He is not wrong about the stock. But the buyback also signals that Salesforce sees the seat contraction trend as structural, not cyclical. You do not allocate $50 billion to buybacks if you expect organic growth to do the work.
The collateral damage extended beyond Salesforce's own stock. IBM shares dropped 13 percent — the worst single-session decline since 2000 — after Anthropic published a blog post about COBOL modernization that the market read as an existential threat to IBM's legacy services business. Meanwhile, Salesforce completed its $8 billion acquisition of Informatica and booked an $811 million gain on its Anthropic investment. The acquisitions, the investments, the buybacks — they all point in the same direction. The enterprise software industry is consolidating around agent integration layers, and the companies doing the consolidating want those layers to be theirs.
The consulting firms are accelerating this dynamic. Accenture deployed a 30,000-person "Anthropic Business Group" to serve Fortune 500 clients. OpenAI established Frontier Alliances with McKinsey, BCG, Accenture, and Capgemini. These are not partnerships. They are distribution channels for lock-in. When a consulting firm deploys an agent architecture for a Fortune 500 client, the connectors, workflows, and integrations they build become the client's de facto standard. Switching costs compound with every workflow deployed.
This is where portable architecture stops being an abstraction and starts being a procurement requirement. AgentPMT's Workflow and Skills Builder produces workflows that are exportable, shareable, and remixable — and that work identically across every supported LLM. Write once, run anywhere. Chain tools, set conditional logic, define multi-step processes with clear task definitions, defined inputs and outputs, and explicit success criteria. When a workflow fails, you see exactly which step broke. When a consulting firm builds your agent workflows on a proprietary connector layer, you see an invoice.
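What makes a workflow portable is that it is data plus a small runner rather than configuration trapped inside a vendor's console. Here is a minimal sketch of that idea — the step schema and names are invented for illustration, not AgentPMT's actual export format — showing steps with defined inputs and outputs, explicit success criteria, and a failure that names the exact step that broke.

```python
# Hypothetical portable workflow: each step is plain data plus callables,
# so the definition can move between platforms with the runner.
workflow = [
    {"name": "fetch_invoices",
     "run": lambda ctx: {**ctx, "invoices": [120, 80]},
     "success": lambda ctx: "invoices" in ctx},
    {"name": "total",
     "run": lambda ctx: {**ctx, "total": sum(ctx["invoices"])},
     "success": lambda ctx: ctx["total"] >= 0},
    {"name": "flag_large",
     "run": lambda ctx: {**ctx, "flag": ctx["total"] > 150},
     "success": lambda ctx: "flag" in ctx},
]

def run(workflow, ctx):
    # Explicit success criteria mean a failure identifies its step,
    # instead of surfacing as an opaque end-to-end error.
    for step in workflow:
        ctx = step["run"](ctx)
        if not step["success"](ctx):
            raise RuntimeError(f"workflow failed at step: {step['name']}")
    return ctx

result = run(workflow, {})  # result["total"] == 200, result["flag"] is True
```

The same three-step definition runs unchanged under any runner that understands the schema, which is the whole point of "write once, run anywhere."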
The Governance Gap
On February 24 — the same day Anthropic and New Relic made their announcements — UC Berkeley's Center for Long-Term Cybersecurity published a 67-page Agentic AI Risk Profile that extends the NIST AI Risk Management Framework. The timing was coincidental. The relevance was not.
The Berkeley framework introduces six autonomy levels, L0 through L5, and treats agency as a spectrum rather than a binary property. The most significant conceptual contribution is the framing of risk as "an emergent property of autonomous systems, rather than solely a property of individual models." This distinction matters enormously in the context of integration lock-in. If risk is emergent — arising from the interaction between agents, tools, data sources, and environments — then governance cannot be confined to model-level controls. Governance must operate at the integration layer.
The framework identifies tool access as the primary risk vector for agentic systems. The specific risks enumerated include unauthorized privilege escalation, unintended goal pursuit, cascading compromises across connected systems, and self-replication. The recommended mitigations are familiar to anyone with a background in securing agentic systems: least privilege access, sandboxed environments, continuous monitoring, and a new concept the report calls "agent cards" — standardized documentation for agent capabilities and constraints. As one panelist noted, "One of the ways to make sure you limit your risk is isolated environments and sandboxing." Security misconfiguration already ranks as the second-most significant threat on the OWASP Top 10 for 2025.
The Berkeley framework recommends proportional governance: oversight should scale with system autonomy. An L1 agent that summarizes documents needs less oversight than an L4 agent that executes multi-step financial transactions across connected systems. This is sensible. It is also nearly impossible to implement when your governance tooling is locked to a single vendor's platform. If your monitoring, logging, and access controls are all Frontier-proprietary or Cowork-proprietary, then your governance is locked in alongside your integrations. You cannot independently audit what you cannot independently observe. Effective agent observability demands platform independence.
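Proportional governance is straightforward to express once agent metadata is machine-readable. The sketch below is my own illustration: the L0-L5 levels come from the Berkeley profile, but this particular mapping of levels to controls, and the agent-card shape, are invented for this article, not the report's normative text.

```python
# Hypothetical mapping: oversight requirements scale with autonomy level.
CONTROLS_BY_LEVEL = {
    0: {"sandbox": False, "human_approval": False, "audit_log": True},
    1: {"sandbox": False, "human_approval": False, "audit_log": True},
    2: {"sandbox": True,  "human_approval": False, "audit_log": True},
    3: {"sandbox": True,  "human_approval": False, "audit_log": True},
    4: {"sandbox": True,  "human_approval": True,  "audit_log": True},
    5: {"sandbox": True,  "human_approval": True,  "audit_log": True},
}

def required_controls(agent_card):
    # An "agent card" here is just structured metadata about capabilities
    # and autonomy, readable by any platform's governance tooling.
    level = agent_card["autonomy_level"]
    return CONTROLS_BY_LEVEL[min(level, 5)]

card = {
    "name": "invoice-approver",
    "capabilities": ["read_erp", "execute_payment"],
    "autonomy_level": 4,
}
```

The document-summarizing L1 agent gets logging only; the transaction-executing L4 agent gets sandboxing and human approval. None of this logic needs to know which model provider is underneath — which is exactly why it cannot live inside one vendor's proprietary console.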
AgentPMT addresses this directly. Every agent gets a dedicated wallet on Base blockchain with x402 and x402Direct payment capabilities. Budget controls operate at daily, weekly, monthly, and per-transaction levels. Transactions are stablecoin-denominated in USDC — no crypto volatility. The audit trail lives both on-chain and in the dashboard. Full request and response capture, workflow step tracking, and compliance-ready audit trails operate identically regardless of which LLM or platform the agent runs on. AgentPMT's agent credits system — 100 credits to one US dollar, charged only on successful tool calls — means cost governance is built into the execution layer, not bolted on after the fact. This is what platform-neutral governance looks like: observable, auditable, and portable.
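Layered budget enforcement at the execution layer can be sketched in a few lines. The class and method names below are invented for illustration (only the 100-credits-per-dollar ratio and the cap categories come from the description above); the sketch shows per-transaction and rolling caps checked before execution, with credits charged only when the tool call succeeds.

```python
CREDITS_PER_USD = 100  # stated ratio: 100 credits to one US dollar

class BudgetGuard:
    """Hypothetical sketch of execution-layer cost governance."""

    def __init__(self, per_tx_usd, daily_usd):
        self.per_tx = per_tx_usd * CREDITS_PER_USD
        self.daily = daily_usd * CREDITS_PER_USD
        self.spent_today = 0  # a real system would also track weekly/monthly

    def execute(self, tool, cost_credits, *args):
        # Caps are checked before the call ever runs.
        if cost_credits > self.per_tx:
            raise PermissionError("per-transaction cap exceeded")
        if self.spent_today + cost_credits > self.daily:
            raise PermissionError("daily cap exceeded")
        result = tool(*args)           # if this raises, nothing is charged
        self.spent_today += cost_credits  # charged only on success
        return result
```

Because the guard wraps the tool call itself, a failed call leaves the balance untouched, and every successful call leaves a charge that can be reconciled against the audit trail.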
What This Means For You
The cloud lock-in era taught enterprises an expensive lesson about the difference between technical compatibility and architectural dependency. You could always export your data from AWS. You could never easily export your infrastructure-as-code templates, your CI/CD pipelines, your monitoring configurations, your IAM policies, or the institutional knowledge embedded in all of them. The data was portable. The system was not.
Agent integration lock-in follows the same pattern but with a larger blast radius. The model is portable — every vendor now claims multi-model support. The connectors, workflows, governance configurations, and payment rails are not. Enterprises that commit to a single vendor's integration layer in 2026 will spend 2028 explaining to their boards why switching costs have made competitive bidding impossible. AgentPMT exists as the neutral layer: a single integration that spans every major platform, with governance and financial controls that do not belong to any model provider.
What to Watch
The Linux Foundation's Agentic AI Foundation, announced earlier this year, aims to establish open standards for agent interoperability. Whether it can move fast enough to matter before proprietary integration layers become entrenched is the central question.
Salesforce's Agentic Work Units pricing model — which charges per agent action rather than per seat — represents a fundamental shift in enterprise software economics. If this model scales, it will reshape how every SaaS vendor prices their product. The seat contraction signal from Q4 suggests it is already scaling.
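The economics of that shift are worth a back-of-the-envelope check. The numbers below are entirely hypothetical — Salesforce's actual Agentic Work Unit rates are not cited in this article — but they illustrate how per-action pricing lets vendor revenue survive seat contraction.

```python
# Hypothetical rates, for illustration only.
def per_seat_cost(seats, usd_per_seat_month):
    return seats * usd_per_seat_month

def per_action_cost(actions, usd_per_action):
    return actions * usd_per_action

# A team shrinks from 100 licensed seats to 20 while agents absorb the
# workload as 60,000 billable actions per month. Revenue now tracks
# work performed rather than headcount.
before = per_seat_cost(100, 150)
after = per_seat_cost(20, 150) + per_action_cost(60_000, 0.20)
# before == 15000; after == 15000.0 -- the vendor is made whole
# despite an 80 percent drop in seats.
```

Under these made-up rates the vendor breaks even on the transition, which is precisely why a vendor that controls both the integration layer and the action meter has no reason to fear seat contraction.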
OpenAI's consulting alliances with McKinsey, BCG, Accenture, and Capgemini are creating a de facto integration standard through deployment volume rather than technical merit. When the four largest consulting firms all build on the same connector layer, that layer becomes the standard regardless of what any standards body publishes.
And Berkeley's L0-L5 autonomy framework bears watching for regulatory uptake. If regulators adopt it — or something derived from it — as the basis for agentic AI governance requirements, the enterprises with platform-neutral governance tooling will have a significant compliance advantage over those locked into single-vendor monitoring.
The Architecture Decision
The next six months will determine the integration architecture for enterprise AI for the next decade. Four vendors have placed their bets on proprietary connector layers wrapped in open-standard language. The consulting firms are deploying those layers at scale. The financial incentives — Agentforce's $800 million run rate, Salesforce's $8 billion Informatica acquisition, Accenture's 30,000-person deployment — are enormous and accelerating.
The counter-position is architectural neutrality: integration layers that do not belong to any model provider, governance that operates identically across platforms, and financial controls that live on auditable infrastructure rather than proprietary dashboards. That is what AgentPMT's Dynamic MCP provides. The enterprises that build on neutral infrastructure now will be the ones with actual choices later.
Key Takeaways
- Integration, not the model, is the new lock-in. OpenAI, Anthropic, New Relic, and Salesforce all shipped proprietary connector layers on top of open standards in a 20-day window. The models are increasingly interchangeable; the integration plumbing is not.
- Seat contraction is structural. Salesforce's Q4 earnings reveal AI agents displacing human-operated software seats. Enterprise software economics are shifting from per-seat to per-action pricing, and the vendor that controls the integration layer controls the pricing.
- Governance locked to a platform is not governance. UC Berkeley's Agentic AI Risk Profile frames risk as emergent from system interactions, not individual models. Platform-neutral audit trails, budget controls, and monitoring are prerequisites for meaningful oversight.
- Architectural neutrality is a procurement requirement. AgentPMT's Dynamic MCP, portable workflows, and on-chain agent wallets provide the vendor-neutral integration layer that the current market consolidation demands.
Sources
- Anthropic Launches Claude Cowork with Enterprise Plugins — TechCrunch
- Salesforce Q4 FY2026 Earnings Report — CNBC
- OpenAI Launches Frontier Enterprise Platform — TechCrunch
- OpenAI Establishes Frontier Consulting Alliances — CNBC
- New Relic Launches Agentic AI Platform — TechCrunch
- Agentic AI Risk Management Standards Profile — UC Berkeley Center for Long-Term Cybersecurity
- Enterprise AI Agent Adoption Forecast — Gartner
- Top 10 Security Risks 2025 — OWASP