On March 11, the U.S. Commerce Secretary is required to deliver a report identifying which state AI laws are "burdensome" enough to warrant federal preemption. Two days before that, the comment period closes on NIST's first-ever formal inquiry into securing AI agent systems. And on February 12, Anthropic dropped $20 million into a group backing political candidates who favor AI regulation, a direct counter to the $125 million "Leading the Future" PAC backed by tech executives pushing to eliminate state-level rules entirely.
Three governance forces are colliding, and none of them agree on what to do. The federal government wants centralization: a December 2025 executive order, a DOJ task force created January 9 to challenge state laws, and the Commerce Department's March 11 evaluation. States aren't waiting. California's SB 53 went live January 1, requiring frontier model transparency. New York signed the RAISE Act with 72-hour incident reporting. Texas launched RAIGA with a regulatory sandbox. Colorado's AI Act is delayed to June 30 but intact. Meanwhile, Singapore published the world's first governance framework written specifically for agentic AI, acknowledging that existing rules designed for text-in/text-out systems don't map to agents that choose tools, execute multi-step workflows, and spend money autonomously.
This is why governance needs to be infrastructure, not an afterthought. Companies building agent governance into their platforms today, with per-tool cost tracking, budget enforcement, audit trails, and credential isolation, are the ones that will be compliant no matter what comes out of Washington, Sacramento, or Albany. AgentPMT's architecture was designed for exactly this regulatory uncertainty: compliance-ready audit trails where every transaction is recorded with timestamps, parameters, costs, and outcomes, combined with budget enforcement that works regardless of which framework prevails. The agents are already live: BNY Mellon runs 20,000 agents with their own credentials, Goldman Sachs embedded Anthropic engineers for six months to automate compliance, and 80% of Fortune 500 companies are deploying AI agents, per Microsoft's February 10 Cyber Pulse report. The governance isn't keeping pace. And the regulatory landscape is about to shift dramatically in the next 24 days.
The Federal Preemption Push: Washington Wants One Rulebook
The Trump administration's approach to AI governance can be summarized in five words: tear down the state laws. On December 11, 2025, President Trump signed an executive order directing the Commerce Secretary to identify "burdensome" state AI regulations by March 11. On January 9, Attorney General Pam Bondi announced the AI Litigation Task Force, chaired by the Attorney General herself, advised by White House AI and crypto czar David Sacks, and staffed with representatives from the Civil Division and Solicitor General's office. The task force's mandate, per the internal DOJ memo reviewed by CBS News, is to challenge state laws as unconstitutional regulation of interstate commerce or as preempted by existing federal regulations. Sacks said the order "will provide the tools necessary for the federal government to push back against the most onerous and excessive state regulation."
The legislative arm is moving in parallel. Senator Marsha Blackburn's AMERICA AI Act — the "Advancing Machine Intelligence by Eliminating Regulatory Interstate Chaos Across American Industry Act" — is the most ambitious federal preemption bill yet. It would require Training Data Use Records, mandate bias audits for high-risk systems, and explicitly preempt state laws regulating frontier model risk management. The bill establishes a Duty of Care enforceable by the FTC and includes catastrophic risk protocols with mandatory Department of Homeland Security reporting. It reads like comprehensive regulation, but it is still a bill, not a law.
Meanwhile, the FTC has signaled it has no appetite for AI-specific rulemaking. In December 2025, the Commission voted 2-0 to vacate its 2024 consent order against Rytr, an AI writing assistant, concluding that the original complaint "failed to adequately allege a violation of Section 5." Christopher Mufarrige, director of the FTC's Bureau of Consumer Protection, was blunt: "condemning a technology or service simply because it potentially could be used in a problematic manner is inconsistent with the law and ordered liberty." The administration is also exploring financial leverage: potentially conditioning $21 billion in BEAD broadband funds on states avoiding "onerous" AI regulations, with the Commerce Secretary's analysis due March 16.
Federal preemption sounds clean on paper — one rulebook instead of fifty. But right now, there is no federal rulebook. The executive order directs agencies to evaluate and challenge state laws, not to replace them with federal standards. The AMERICA AI Act is a proposal. The FTC just said it won't do AI-specific rulemaking. Federal preemption as currently constituted means tearing down state rules while offering nothing in their place. This governance vacuum is exactly why AgentPMT built compliance-ready infrastructure into the platform from day one. When federal rules arrive, AgentPMT-governed agents will already have the audit data regulators will demand — every tool call logged with full request/response capture, workflow step tracking, and structured audit trails. When state rules differ, AgentPMT's per-tool cost tracking and workflow step logging give enterprises the documentation to demonstrate compliance in any jurisdiction.
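To make "structured audit trail" concrete, here is a minimal sketch of the kind of per-tool-call record described above. The field names and the append-only JSON Lines format are illustrative assumptions, not AgentPMT's actual schema:

```python
# A minimal sketch of a structured audit record for one agent tool call.
# Field names are illustrative, not AgentPMT's actual schema.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json
import uuid

@dataclass
class ToolCallRecord:
    """One logged tool invocation: what ran, with what, costing what, with what result."""
    agent_id: str
    tool_name: str
    parameters: dict          # full request capture
    response: dict            # full response capture
    cost_usd: float           # per-tool cost tracking
    outcome: str              # "success" | "error" | "denied_by_budget"
    workflow_step: int        # position in the multi-step workflow
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    record_id: str = field(default_factory=lambda: str(uuid.uuid4()))

def append_audit_log(record: ToolCallRecord, path: str = "audit.jsonl") -> None:
    """Append-only JSON Lines log: one record per tool call, never mutated."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```

The append-only property matters: an audit trail a regulator will trust is one the agent (or its operator) cannot quietly rewrite after the fact.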
Five Jurisdictions, Five Different Rules
While Washington debates preemption, states are building the regulatory framework that actually governs AI agents today.
California's SB 53, effective January 1, 2026, requires frontier model developers with $500 million or more in annual revenue to publish safety framework documentation, conduct red-teaming, maintain whistleblower protections, and report critical safety incidents within 15 days. New York's RAISE Act, signed by Governor Hochul on December 20, 2025, goes further: 72-hour incident reporting (one of the most contentious provisions during negotiations), a new AI oversight office within the Department of Financial Services, and penalties of up to $1 million for first violations and $3 million for subsequent ones. The law takes effect January 1, 2027, targeting frontier models and developers with $500 million or more in annual revenue.
Texas's RAIGA, effective January 1, 2026, takes a different approach: consumer protections paired with a regulatory sandbox program that lets companies test AI systems with reduced regulatory risk. Illinois's H.B. 3773, also effective January 1, prohibits using ZIP codes in AI models for candidate evaluation and provides employees a private right of action against AI-driven discrimination. Colorado's AI Act — delayed from February 1 to June 30, 2026, after more than 150 lobbyists clashed during a special legislative session — mandates impact assessments, consumer disclosures, and risk management programs for high-stakes decisions in employment, housing, loans, and healthcare. As Representative Brianna Titone put it during the Colorado debate: "Big tech companies do not want to come to the table — they do not want compromise, they do not want any liability."
Thirty-eight states passed AI-related legislation in the last year, per NBC News. Forty-two state attorneys general sent letters to major AI companies on December 10, 2025, requesting pre-release safety testing, independent audits, and incident logging. The fragmentation is real. But it's also creating accidental innovation. Texas's sandbox is testing ideas the federal government hasn't proposed. New York's 72-hour reporting window is stricter than California's 15-day window, which means companies meeting the New York standard also satisfy every longer reporting window elsewhere. Smart companies aren't waiting for regulatory clarity. They're building to the strictest standard and treating the surplus compliance as a competitive advantage.
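The "strictest standard" logic is simple arithmetic, and worth making concrete. A minimal sketch, using the two reporting windows cited above; the jurisdiction names and windows come from the laws discussed, everything else is illustrative:

```python
# Given an incident time, compute each jurisdiction's reporting deadline
# and file by the earliest one. Windows reflect the laws cited above.
from datetime import datetime, timedelta, timezone

REPORTING_WINDOWS = {
    "NY RAISE Act": timedelta(hours=72),
    "CA SB 53": timedelta(days=15),
}

def report_by(incident_at: datetime) -> tuple[str, datetime]:
    """Return the strictest jurisdiction and its deadline."""
    deadlines = {j: incident_at + w for j, w in REPORTING_WINDOWS.items()}
    strictest = min(deadlines, key=deadlines.get)
    return strictest, deadlines[strictest]

incident = datetime(2026, 3, 1, 9, 0, tzinfo=timezone.utc)
jurisdiction, deadline = report_by(incident)
print(f"File by {deadline:%Y-%m-%d %H:%M} UTC to satisfy {jurisdiction} "
      "and, by extension, every longer window.")
```

Meeting the 72-hour deadline trivially satisfies the 15-day one; the reverse is not true, which is the whole argument for building to the strictest window.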
AgentPMT's auditable-everything architecture is built for multi-jurisdictional compliance. The platform's full request/response capture, workflow step tracking, and persistent session logging provide the documentation companies need whether the reporting window is 72 hours, 15 days, or whatever the federal standard eventually becomes. Every agent interaction is logged with complete context — what ran, what succeeded, what failed, and exactly where. Companies using AgentPMT can produce compliance documentation for any jurisdiction on demand because the audit trail exists regardless. If your agents operate across state lines — and most do — you need infrastructure that logs everything by default. The cost of retroactively adding logging is far higher than building with it from the start.
The Agent-Specific Governance Gap: Existing Rules Weren't Written for This
The regulations discussed so far share one fundamental problem: they were written for AI systems that take text in and produce text out. They don't address agents that choose tools, interpret outputs, recover from errors, and spend money autonomously. Singapore recognized this first.
On January 22, 2026, Singapore's Minister for Digital Development Josephine Teo announced the Model AI Governance Framework for Agentic AI at the World Economic Forum in Davos — the world's first governance framework written specifically for autonomous AI systems. Developed by the Infocomm Media Development Authority, the framework identifies four governance dimensions: bounding risks upfront by limiting agent autonomy and data access, ensuring human accountability through clear responsibility allocation and approval checkpoints, implementing technical controls via sandboxing and continuous monitoring, and promoting end-user responsibility through transparency and training. The framework is voluntary, but organizations remain legally accountable for their agents' behaviors and actions. As April Chin, co-CEO of Resaro, noted: "The MGF fills a critical gap in policy guidance for agentic AI" and "helps organisations define agent boundaries, identify risks, and implement mitigations."
The U.S. federal government is now playing catch-up. On January 8, NIST's Center for AI Standards and Innovation published its first-ever Request for Information specifically targeting agentic AI security. The RFI defines AI agents as systems "capable of taking actions that affect external state" — creating persistent changes such as modifying permissions, creating credentials, rotating keys, or changing policies. NIST identified specific threats: indirect prompt injection, data poisoning, backdoor attacks, specification gaming, and agent hijacking. The comment deadline is March 9, and responses will inform federal technical guidelines for agent security. This isn't an abstract exercise. NIST has already conducted initial evaluations of agent hijacking and determined that continuously improving shared evaluation frameworks is a critical priority.
The gap between deployment and governance is staggering. Microsoft's Cyber Pulse report found 80% of Fortune 500 companies deploying AI agents, with 29% of employees using unsanctioned "shadow agents" for work tasks. As Microsoft Corporate Vice President Vasu Jakkal wrote: "AI agents are scaling faster than some companies can see them — and that visibility gap is a business risk." Deloitte finds that only 21% of organizations have mature AI governance. The OWASP MCP Top 10 is now published, and 36.7% of MCP servers remain vulnerable to server-side request forgery (SSRF).
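That SSRF figure is worth pausing on, because the core mitigation is well understood. A hedged sketch of one standard defense, assuming a tool server that fetches URLs on an agent's behalf: resolve the target host and refuse anything that lands in private, loopback, or otherwise non-public address space. This is not a complete defense; production systems also need to handle DNS rebinding and redirects.

```python
# Reject outbound requests whose host resolves to a non-public address,
# the core SSRF guard for a server fetching URLs on an agent's behalf.
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_outbound(url: str) -> bool:
    """Return True only if every resolved address for the host is public."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, parsed.port or 443)
    except socket.gaierror:
        return False
    for info in infos:
        # Strip any IPv6 zone suffix (e.g. "%eth0") before parsing.
        addr = ipaddress.ip_address(info[4][0].split("%")[0])
        if addr.is_private or addr.is_loopback or addr.is_link_local or addr.is_reserved:
            return False
    return True
```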
Then there's the political dimension. On February 12, Anthropic donated $20 million to Public First Action, a bipartisan advocacy group backing 30 to 50 congressional candidates who favor AI regulation. Public First Action plans to raise $50 million to $75 million total. The group's priorities: giving the public more visibility into AI companies, opposing federal preemption without strong federal standards, export controls on AI chips, and regulation of high-risk applications like AI-enabled biological weapons. Anthropic's statement was direct: "At present, there are few organized efforts to help mobilize people and politicians who understand what's at stake in AI development. Instead, vast resources have flowed to political organizations that oppose these efforts."
The opposing force is Leading the Future, a super PAC backed by Andreessen Horowitz, OpenAI co-founder Greg Brockman, and venture capitalists Joe Lonsdale and Ron Conway, which has raised more than $125 million to support candidates favoring lighter regulation. One of the biggest AI labs in the room just bet $20 million that regulation is coming, and coming from people who understand the technology. When the company building Claude thinks governance is existentially necessary, the signal is clear. The question is whether governance infrastructure catches up to deployment speed, or whether the shadow agents used by 29% of employees become the next headline.
Singapore's framework identifies four governance dimensions — bounding risks, human accountability, technical controls, and end-user responsibility. AgentPMT's architecture maps directly to each one. Budget controls and vendor whitelisting bound risks upfront. Human-in-the-loop communication — where agents message humans mid-workflow and humans respond via the mobile app — ensures accountability. Encrypted credential vaults and per-transaction budget enforcement provide technical controls. The dashboard and mobile app put end-user responsibility in the operator's hands. AgentPMT isn't waiting for regulators to define agent governance — the platform already implements the patterns Singapore codified.
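To show how two of those dimensions (bounding risks, human accountability) reduce to code, here is a minimal sketch. The names and threshold are hypothetical, not AgentPMT's API; request_human_approval stands in for a real notification channel such as a mobile-app prompt:

```python
# Illustrative mapping of Singapore's first two dimensions to code-level checks.
APPROVED_VENDORS = {"vendor-a.example", "vendor-b.example"}  # bounds risk upfront
APPROVAL_THRESHOLD_USD = 50.00                               # accountability checkpoint

def request_human_approval(agent_id: str, action: str) -> bool:
    """Hypothetical stand-in: ask a human and block until they answer."""
    answer = input(f"[{agent_id}] approve '{action}'? (y/n) ")
    return answer.strip().lower() == "y"

def execute_purchase(agent_id: str, vendor: str, amount_usd: float) -> str:
    if vendor not in APPROVED_VENDORS:
        return "denied: vendor not whitelisted"
    if amount_usd > APPROVAL_THRESHOLD_USD:
        if not request_human_approval(agent_id, f"spend ${amount_usd:.2f} at {vendor}"):
            return "denied: human declined"
    return "executed"
```

The point of the pattern: the agent never decides whether a checkpoint applies. The whitelist and threshold sit outside the model, so a confused or hijacked agent hits the same wall as a well-behaved one.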
What This Means For You
The next 24 days will shape the regulatory landscape for years. Regardless of the outcome, companies deploying AI agents cannot wait for clarity. The compliance burden will be retroactive: when rules arrive, regulators will want records of what your agents have already done. Companies that have audit trails, spending controls, and incident documentation from day one will be positioned to comply with whatever rules emerge. Companies that don't will be retrofitting at enormous cost.
If your agents operate across states, build to the strictest standard, which is currently New York's 72-hour incident reporting. This gives you compliance surplus in every other jurisdiction and demonstrates good faith to regulators. Submit comments to NIST's CAISI RFI before March 9 if you have a perspective on AI agent security. Start documenting your agent governance framework now; Singapore's voluntary framework is a likely template for what eventually becomes mandatory.
AgentPMT was built for regulatory uncertainty. The platform's governance architecture — per-tool cost tracking, multi-budget system, spending caps with hard server-side enforcement, full request/response logging, workflow step tracking, compliance-ready audit trails, credential isolation, vendor whitelisting — provides the compliance infrastructure enterprises need regardless of which regulatory framework prevails. You deploy AI agents with confidence because you define the boundaries and the system enforces them.
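"Hard server-side enforcement" has a specific meaning worth illustrating: the budget check and the deduction happen atomically on the server, so a fast-spending agent cannot race past its cap between the check and the charge. A minimal sketch under that assumption; the class and method names are illustrative, not AgentPMT's API:

```python
# Atomic check-and-decrement: the agent asks, the server decides, and
# concurrent requests cannot collectively exceed the cap.
import threading

class BudgetEnforcer:
    def __init__(self, cap_usd: float):
        self._remaining = cap_usd
        self._lock = threading.Lock()

    def try_spend(self, amount_usd: float) -> bool:
        """Atomically reserve funds; deny if the cap would be exceeded."""
        with self._lock:
            if amount_usd > self._remaining:
                return False          # hard denial: no partial overruns
            self._remaining -= amount_usd
            return True

budget = BudgetEnforcer(cap_usd=100.00)
assert budget.try_spend(60.00) is True
assert budget.try_spend(60.00) is False   # second call would exceed the cap
```

A client-side check cannot provide this guarantee; only the party holding the ledger can enforce the cap, which is why the enforcement has to live on the server.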
What to Watch
March 9, 2026: NIST CAISI comment period closes on AI agent security RFI. Responses will inform federal technical guidelines for securing autonomous AI systems.
March 11, 2026: Commerce Department delivers its evaluation of state AI laws to the President. This will signal which states face federal legal challenges and whether the administration pursues funding leverage through BEAD broadband conditions.
June 30, 2026: Colorado AI Act implementation date — delayed but all obligations intact, including impact assessments and consumer disclosures for high-risk AI systems.
August 2, 2026: EU AI Act high-risk system enforcement scheduled, though the Digital Omnibus package may push this to December 2027 — contingent on harmonized standards approval.
January 1, 2027: New York RAISE Act becomes effective, with the nation's strictest 72-hour incident reporting window and penalties up to $3 million.
Anthropic's political investment: Watch whether $20 million shifts the midterm conversation. If pro-regulation candidates win, expect accelerated governance legislation in 2027. Combined with Leading the Future's $125 million, total AI political spending could exceed $175 million this cycle.
The regulatory fight over AI agents isn't abstract — it's happening on specific dates with specific deadlines and specific dollars behind it. March 9 and March 11 are when the federal government shows its hand. What happens between now and then determines whether agent governance becomes coherent infrastructure or a compliance scramble. The companies that built governance into their agent architecture from day one — per-tool cost tracking, budget enforcement, full audit trails, credential isolation — won't be scrambling when the rules arrive. Explore AgentPMT's governance architecture today.
Key Takeaways
- Two federal deadlines — March 9 (NIST AI agent security comments close) and March 11 (Commerce Department evaluates state AI laws) — will define the U.S. approach to AI agent governance for the foreseeable future.
- Anthropic's $20M bet on pro-regulation candidates directly counters the $125M anti-regulation PAC, splitting the AI industry on governance for the first time at this scale.
- Build to the strictest standard now (New York's 72-hour incident reporting) — retroactive compliance is always more expensive than governance built from day one.
Sources
DOJ Creates Task Force to Challenge State AI Regulations - CBS News
Navigating the Emerging Federal-State AI Showdown: DOJ Establishes AI Litigation Task Force - BakerHostetler
Inside the DOJ's New AI Litigation Task Force - Baker Botts
The TRUMP AMERICA AI Act: Federal Preemption Meets Comprehensive Regulation - National Law Review / Jones Walker
The FTC Walks Back Its Rytr Enforcement Action - All About Advertising Law / Lewis Rice
Emerging Federal AI Strategy: FTC Sets Aside Rytr Consent Order - Mintz
New York Governor Signs Sweeping AI Safety Law - Fisher Phillips
New York's AI Safety Law Claims National Alignment but Delivers Fragmentation - Center for Data Innovation
Several State AI Laws Set to Go into Effect in 2026, Despite Federal Government's Push - Lexology / Barnes & Thornburg
Colorado's AI Law Delayed Until June 2026 - Clark Hill PLC
Singapore Debuts World's First Governance Framework for Agentic AI - Computer Weekly
Singapore Launches New Model AI Governance Framework for Agentic AI - IMDA
CAISI Issues Request for Information About Securing AI Agent Systems - NIST
Federal Register: RFI Regarding Security Considerations for AI Agents - Federal Register / NIST
Anthropic Gives $20 Million to Group Pushing for AI Regulations - CNBC
Anthropic Pledges $20 Million to Candidates Who Favor AI Safety - Bloomberg
Anthropic Pours $20 Million Into AI Policy Fight - Axios
80% of Fortune 500 Use Active AI Agents - Microsoft Security Blog
A Field Guide to 2026 Federal, State and EU AI Laws - The New Stack
From Guardrails to Governance: A CEO's Guide for Securing Agentic Systems - MIT Technology Review / Protegrity
Singapore's New Model AI Governance Framework for Agentic AI - K&L Gates
NIST Asks Public for Help Securing AI Agents - Cybersecurity Dive
New Laws in 2026 Target AI and Deepfakes - NBC News
EU Digital Omnibus Proposes Delay of AI Compliance Deadlines - OneTrust
