
Microsoft Open-Sources AI Agent Governance Toolkit
Microsoft released a seven-package, MIT-licensed toolkit that addresses all 10 OWASP agentic AI risks with sub-millisecond policy enforcement.
Microsoft released the Agent Governance Toolkit on April 2, an open-source system designed to enforce runtime security policies across autonomous AI agent frameworks. The toolkit is the first to address all ten risks identified in the OWASP Agentic AI Top 10, published in December 2025.
The system comprises seven independently installable packages spanning Python, TypeScript, Rust, Go, and .NET. At its core is Agent OS, a stateless policy engine that intercepts every agent action before execution, operating at sub-millisecond latency with a p99 under 0.1 milliseconds. Agent Mesh provides cryptographic agent identity using decentralized identifiers and introduces the Inter-Agent Trust Protocol for secure agent-to-agent communication with dynamic trust scoring on a 0-1000 scale.
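The intercept-before-execute pattern described above can be sketched in a few lines. This is an illustrative example, not the toolkit's actual API: the names `PolicyEngine`, `AgentAction`, and the sample rule are assumptions introduced here to show how a stateless engine can vet every action before it runs.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class AgentAction:
    """A hypothetical record of what an agent is about to do."""
    agent_id: str
    tool: str
    arguments: dict

# A rule inspects one action and returns True to allow it.
PolicyRule = Callable[[AgentAction], bool]

class PolicyEngine:
    """Stateless engine: every action passes through check() before execution."""

    def __init__(self, rules: list[PolicyRule]):
        self._rules = rules

    def check(self, action: AgentAction) -> bool:
        # The action may execute only if every configured rule allows it.
        return all(rule(action) for rule in self._rules)

# Example rule (illustrative): deny direct shell access to all agents.
def deny_shell(action: AgentAction) -> bool:
    return action.tool != "shell"

engine = PolicyEngine([deny_shell])
print(engine.check(AgentAction("agent-1", "http_get", {"url": "https://example.com"})))  # True
print(engine.check(AgentAction("agent-1", "shell", {"cmd": "rm -rf /"})))  # False
```

Because the engine holds no per-request state, checks like this can run on the hot path, which is what makes the sub-millisecond latency figures quoted above plausible.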
The toolkit integrates directly with popular agent frameworks including LangChain, CrewAI, OpenAI Agents SDK, Haystack, and LlamaIndex through native extension points. As Principal Group Engineering Manager Imran Siddique noted, the design prioritizes working with existing frameworks rather than requiring code rewrites. The project ships with over 9,500 tests, continuous fuzzing, and SLSA-compatible build provenance.
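The "extension point" integration style, where governance wraps an existing framework tool rather than requiring agent code to be rewritten, can be illustrated with a decorator. Everything here is a hypothetical stand-in: `is_allowed` represents whatever enforcement call the toolkit actually exposes, and the tool function is a placeholder.

```python
import functools

def is_allowed(tool_name: str, kwargs: dict) -> bool:
    # Stand-in for the real policy check; this toy rule denies file writes.
    return tool_name != "write_file"

def governed(func):
    """Wrap an existing tool so a policy check runs before every invocation."""
    @functools.wraps(func)
    def wrapper(**kwargs):
        if not is_allowed(func.__name__, kwargs):
            raise PermissionError(f"policy denied: {func.__name__}")
        return func(**kwargs)
    return wrapper

# The agent's original tool code is untouched; only the decorator is added.
@governed
def http_get(url: str) -> str:
    return f"GET {url}"

@governed
def write_file(path: str, data: str) -> None:
    pass  # never reached under the toy policy above

print(http_get(url="https://example.com"))  # GET https://example.com
```

The same shape maps onto the callback and middleware hooks that frameworks like LangChain and CrewAI already expose, which is why integration does not require rewriting agent logic.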
The release arrives at a critical moment. EU AI Act high-risk obligations take effect in August 2026, Colorado AI Act enforcement begins in June, and organizations are deploying AI agents faster than governance infrastructure can keep up. Microsoft has signaled plans to transition the project to foundation governance, engaging with OWASP and the LF AI and Data Foundation to ensure community-driven development.
Sources
- Introducing the Agent Governance Toolkit: Open-source runtime security for AI agents -- Microsoft Open Source Blog

