
1,100 Government AI Use Cases Under a Four-Page Federal Plan
The White House released a four-page National AI Policy Framework proposing to preempt all state AI laws, while federal agencies already operate over 1,100 active AI use cases with thin governance — vendor self-evaluation, missed reporting deadlines, and a modernization budget Congress cut from $45 million to $8 million.
On March 20, the White House released its National AI Policy Framework — a document authored by science adviser Michael Kratsios and AI/crypto adviser David Sacks that proposes to replace every state AI law in the country with a single federal standard. The entire framework runs four pages.
Federal agencies, meanwhile, are operating more than 1,100 active AI use cases. The governance infrastructure meant to manage all of that relies on an OMB memo that lets vendors evaluate their own performance, a use-case inventory that missed its reporting deadline, and a modernization office whose budget Congress gutted before it could get started.
What follows is a look at what the framework proposes, what agencies are actually deploying, and where the distance between the two should concern anyone whose permits, benefits, or safety depend on government AI getting it right.
The Four-Page Framework
The framework fulfills a directive from Executive Order 14365, signed in December 2025. Its central recommendation: Congress should preempt state AI laws that impose "inconsistent or undue burdens" on developers. Under the proposal, states would lose authority to regulate AI model development — characterized as inherently interstate commerce — or to hold developers liable when third parties misuse their models.
Seven priority areas anchor the document: children's safety, community effects, copyright, government censorship, federal regulation, workforce, and state preemption. The framework calls for regulatory "sandboxes" granting exemptions for up to ten years and opposes creating a new federal AI regulatory body. Existing sector-specific agencies would handle oversight — but the document does not specify how, with what resources, or under what enforcement authority.
Patrick Hedger of NetChoice called for a "light-touch regulatory environment." Brad Carson of Americans for Responsible Innovation warned the framework gives "another chance for tech companies to launch harmful products with no accountability."
Both responses track existing battle lines. What matters most about the framework, though, is its legislative track record: the same preemption language was stripped from the GOP budget reconciliation bill and excluded from the defense policy bill. This is the administration's third attempt to get preemption through Congress, and the first two went nowhere.
House leadership endorsed the proposal immediately. Senator Marsha Blackburn released a competing draft, stating she had worked to develop legislation that could "garner bipartisan support and accomplish the President's goals." Neither chamber has moved preemption to a vote.
What Federal Agencies Are Running
While the White House proposes rules for the private sector, federal agencies are deep into their own AI deployments — with considerably less governance structure than the framework envisions for everyone else.
Gallup's most recent survey found that 43% of public-sector employees now use AI at least a few times per year. The public sector has pulled slightly ahead of the private sector on overall adoption, but trails meaningfully on strategic readiness: a minority of public-sector organizations report having a clear AI strategy, compared to a majority in the private sector. The tools arrived well before the plans did.
The specific deployments show the operational footprint. GSA launched GSAi, a generative AI chatbot powered by models from Anthropic and Meta, with plans to expand it across the agency. The Defense Department's GenAI.mil platform serves millions of military and civilian personnel. The Department of Education feeds program spending data into AI on Microsoft Azure. DOGE's CamoGPT scans Army records, and its AutoRIF tool assists with workforce reduction decisions.
GSA's acting administrator, Stephen Ehikian, compared the rollout to "giving a personal computer to every worker." The analogy is more revealing than intended. Personal computers arrived decades before organizations understood data governance, information security, or acceptable use policies. Government AI is following the same sequence — broad adoption first, institutional controls later, consequences in between.
The multi-model reality compounds the governance challenge. Agencies are pulling from Anthropic, Meta, OpenAI, and Microsoft through separate procurement channels with different logging requirements, different terms, and different oversight mechanisms. Platforms that offer cross-platform agent orchestration — like AgentPMT, whose Dynamic MCP server routes agent traffic through a unified interface regardless of the underlying model vendor — solve this fragmentation by centralizing the audit trail and access controls that each agency would otherwise have to build and maintain separately.
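The underlying pattern is simple to sketch. The snippet below is a minimal illustration of that idea, not AgentPMT's actual API: agent calls to any vendor pass through one router, and every request and response lands in a single append-only audit log. The vendor callables, class names, and log format here are assumptions made for illustration.

```python
import json
import time
import uuid
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical vendor call signature; real SDKs (Anthropic, OpenAI, Azure) differ.
VendorCall = Callable[[str], str]

@dataclass
class AuditRecord:
    request_id: str
    vendor: str
    prompt: str
    response: str
    timestamp: float

class UnifiedRouter:
    """Routes agent traffic to whichever model vendor is requested,
    while writing every request/response pair to one audit log."""

    def __init__(self, vendors: Dict[str, VendorCall], log_path: str):
        self.vendors = vendors
        self.log_path = log_path

    def invoke(self, vendor: str, prompt: str) -> str:
        if vendor not in self.vendors:
            raise ValueError(f"No registered channel for vendor '{vendor}'")
        response = self.vendors[vendor](prompt)
        record = AuditRecord(
            request_id=str(uuid.uuid4()),
            vendor=vendor,
            prompt=prompt,
            response=response,
            timestamp=time.time(),
        )
        # One append-only log, regardless of which vendor handled the call.
        with open(self.log_path, "a") as f:
            f.write(json.dumps(record.__dict__) + "\n")
        return response
```

The design point is that the audit trail and access checks live in the router, not in each agency's per-vendor integration, so adding a new model vendor does not mean building a new oversight mechanism.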
At the state and local level, government automation is expanding into services that directly touch citizens. Agentic AI systems now handle permit processing, guide applicants through forms, route service requests, and assess benefits eligibility. Los Angeles trained its entire city workforce on responsible AI use before deploying tools broadly. New York's MTA partnered with Google to use agentic AI for rail defect detection, a public safety application where the cost of a missed defect is not measured in dollars.
Where Oversight Falls Short
OMB Memo M-26-04, issued December 2025, requires that large language models used by federal agencies be "truthful" and "neutral, nonpartisan." The requirements read well on paper. The enforcement behind them does not hold up under scrutiny.
Merve Hickok, president of the Center for AI and Digital Policy, identified critical design failures in the memo. Vendors evaluate their own model performance, with no independent assessment required. Existing contracts need only be modified "to the extent practicable" — language Hickok called a "massive loophole" that effectively exempts current deployments from new standards. When users encounter problems, the complaint process routes them to the vendor rather than to government oversight or public review.
The xAI contract makes the enforcement gap concrete. Grok, xAI's language model, holds a federal contract signed in September 2025. The model has produced documented bias issues including antisemitic conspiracy content and white supremacist rhetoric. Its model card contains no discussion of bias. The contract continues.
Federal agencies were supposed to publish consolidated AI use-case inventories by November 2025. The deadline passed without compliance. The public still lacks basic information about which agencies deploy which models for what purposes.
Funding tells its own story. Congress appropriated $8 million for DOGE's AI modernization efforts — down from the $45 million the White House requested. Appropriators specifically noted that DOGE's early work "ended up raising government spending" rather than cutting it. The office tasked with making government AI more efficient lost most of its modernization budget before it could demonstrate results.
The real-world weight of this oversight gap is significant. Government AI decisions shape permits, benefits, civic engagement, and civil rights determinations, areas where errors carry consequences that outlast any software update cycle. Chris Radich, UiPath's public sector CTO, has argued that agencies need "operational accountability" built into their AI systems: logged recommendations, defined ownership structures, and clear escalation pathways. Government leaders navigating these requirements can learn from how compliance sets the clock on MCP adoption. Gartner projects that most government agencies will require explainable AI and human-in-the-loop review for citizen-affecting decisions before the decade ends.
That kind of accountability does not emerge from four-page policy documents. It requires purpose-built systems — audit trails at the agent level, approval workflows for sensitive actions, and access controls granular enough to map to existing procurement and compliance structures. AgentPMT's audit system, for instance, logs every agent request and response with full payload inspection, and its human-in-the-loop capability pauses agent actions for approval on high-stakes decisions. The accountability mechanisms that federal governance will eventually require are already available in production platforms — the gap is adoption, not invention.
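To make the approval-workflow idea concrete, here is a minimal sketch of the pattern, assuming a simple file-based audit log. It is illustrative rather than AgentPMT's implementation: a high-stakes action is queued instead of executed, runs only after a named approver signs off, and both the request and the decision are written to the audit trail. The class and method names are hypothetical.

```python
import json
import time
import uuid
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class PendingAction:
    action_id: str
    description: str
    execute: Callable[[], str]          # deferred; nothing runs until approval
    requested_at: float = field(default_factory=time.time)

class ApprovalGate:
    """Pauses high-stakes agent actions until a named human approves them,
    and records the decision alongside the action itself."""

    def __init__(self, audit_log_path: str):
        self.audit_log_path = audit_log_path
        self.pending: Dict[str, PendingAction] = {}

    def request(self, description: str, execute: Callable[[], str]) -> str:
        action = PendingAction(str(uuid.uuid4()), description, execute)
        self.pending[action.action_id] = action
        self._log({"event": "requested", "action_id": action.action_id,
                   "description": description})
        return action.action_id  # the agent waits; nothing has executed yet

    def approve(self, action_id: str, approver: str) -> str:
        action = self.pending.pop(action_id)   # KeyError if unknown or already decided
        result = action.execute()
        self._log({"event": "approved", "action_id": action_id,
                   "approver": approver, "result": result})
        return result

    def reject(self, action_id: str, approver: str, reason: str) -> None:
        self.pending.pop(action_id)
        self._log({"event": "rejected", "action_id": action_id,
                   "approver": approver, "reason": reason})

    def _log(self, entry: dict) -> None:
        entry["timestamp"] = time.time()
        with open(self.audit_log_path, "a") as f:
            f.write(json.dumps(entry) + "\n")
```

The structural point matters more than the code: when the approval record and the action record live in the same log, "who authorized this decision" becomes a query, not an investigation.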
The Deployments Will Not Wait
The framework may never become law. It has failed in Congress twice, bipartisan support has not materialized, and the legislative calendar is filling with other priorities.
The deployments, however, are not paused pending legislation. Public sector AI has moved from pilot programs to production systems that handle real decisions for real people. State and local agencies are deploying agentic AI for citizen services, administrative workflows, and compliance monitoring. Federal adoption will continue to accelerate regardless of the framework's fate.
The practical concern for government technology leaders is straightforward: whether their governance can keep pace with their deployment speed. Building audit systems, enforcing model-level access controls, and establishing human oversight for citizen-affecting decisions cannot wait for a framework that has stalled twice in Congress.
The federal government published four pages to govern more than a thousand AI projects. Whether Congress acts on those pages or not, the projects will keep running — and the people affected by their outputs deserve to know that someone built the accountability to match.
Sources
- White House Releases National Policy Framework for Artificial Intelligence — WilmerHale
- White House AI Framework Pushes for Broad Preemption of State Laws — Governing
- White House AI framework calls for preemption of state laws — Roll Call (Allison Mollenkamp)
- AI Adoption Rapidly Growing in Public Sector — Gallup (Christos Makridis)
- White House Takes Aim at Biased AI in Government, Leaves Key Gaps — Lawfare (Merve Hickok)
- 100 Days of DOGE: Assessing Its Use of Data and AI to Reshape Government — Tech Policy Press
- GSA debuts new generative AI tool for workers — FedScoop (Rebecca Heilweil)
- Executive branch budget pact includes IT investments, less for DOGE — FedScoop (Matt Bracken)
- Agentic AI Turns Government Workflows Into Autonomous, Governed Systems — StateTech Magazine (Adam Stone)
- From national AI policy to agency execution — Nextgov/FCW (Chris Radich)