Banking Automation Enters the Agent Era as New AI Laws Loom

By Stephanie Goodman · March 28, 2026

Visa, Mastercard, and crypto networks are racing to build payment infrastructure for autonomous AI agents, but Colorado's AI Act, the EU AI Act, and federal guidance will determine how much autonomy those agents actually get — and the identity verification gap between the two tracks remains unsolved.

AI Agents In Business · Enterprise AI Implementation · Agentic Payment Systems · AI Agent Identity · News

Visa launched its Intelligent Commerce platform with over 100 partners, more than 30 in its sandbox, and upward of 20 AI agents actively integrating. Mastercard followed with Agent Pay, now being wired into Fiserv's merchant infrastructure across Clover POS terminals and eCommerce platforms. Coinbase shipped agentic wallets in February on the x402 protocol. BNB Chain deployed ERC-8004 for verifiable agent identities four days earlier. The card networks and the crypto networks are converging on the same unsolved problem — how to let an AI agent move money on behalf of a human — and they are building financial automation from opposite ends.

Visa's Trusted Agent Protocol, launched in October 2025, has already processed hundreds of agent-initiated transactions with partners including Skyfire, Nekuda, Ramp, and PayOS. Mastercard's approach replaces card numbers with network tokenization and swaps out human verification signals for AI-specific validation. Both are building infrastructure where the agent authenticates, the network authorizes, and the payment completes without a human touching a checkout screen.

The crypto side arrived at the same destination through a different door. Coinbase's agentic wallets give AI agents their own on-chain accounts with programmable spend limits. BNB Chain's ERC-8004 standard creates verifiable identities for non-human actors on the blockchain, a prerequisite for any on-chain transaction where the initiator is software. These aren't proof-of-concept demos. They are production systems accepting real money.
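The "programmable spend limits" idea can be sketched as a simple guard around each payment. The names here (`AgentWallet`, `PaymentDenied`) are illustrative only, not Coinbase's API:

```python
from dataclasses import dataclass

class PaymentDenied(Exception):
    """Raised when a payment would exceed the agent's configured limits."""

@dataclass
class AgentWallet:
    # Hypothetical sketch of programmable spend limits; not Coinbase's API.
    per_tx_limit: float       # max amount for any single payment
    daily_limit: float        # max total spend per day
    spent_today: float = 0.0

    def pay(self, amount: float) -> float:
        """Apply both limits before spending; return the running daily total."""
        if amount > self.per_tx_limit:
            raise PaymentDenied(f"{amount} exceeds per-transaction limit")
        if self.spent_today + amount > self.daily_limit:
            raise PaymentDenied(f"{amount} would exceed daily limit")
        self.spent_today += amount
        return self.spent_today
```

In a production wallet these checks run at the custody or protocol layer, not in the agent's own code, so the agent cannot bypass them.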

A Visa and Morning Consult survey found that 47% of U.S. shoppers already use AI tools for shopping tasks — price comparison, product research, deal alerts. That figure represents current behavior, not projected adoption, which means the consumer side of agent commerce is already establishing habits that payment infrastructure will need to serve.

But infrastructure alone doesn't determine how far agent autonomy goes. Regulation does. And in 2026, regulation is arriving with specific dates, specific requirements, and specific penalties.

Three Deadlines That Will Shape Agent Payments

Colorado's AI Act, SB24-205, takes effect on June 30, 2026. Any AI system that contributes to a "consequential decision" — lending approvals, insurance underwriting, credit determinations — must now come with disclosure requirements, impact assessments, and consumer appeal rights. A lending model that uses an AI agent to evaluate creditworthiness must document how the agent reached its recommendation. The consumer who gets denied must be told that AI contributed to the decision and given a path to challenge it.

The EU AI Act imposes its high-risk AI requirements on financial services starting August 2, 2026. Auditable documentation, bias testing, and explainability are mandatory for AI systems in banking, insurance, and credit scoring. European regulators are not treating these as aspirational guidelines. Non-compliance carries fines scaled to global revenue.

Illinois moved first. The state's Consumer Financial Data and Business Practices Act amendment expanded oversight of predictive analytics in financial services effective January 1, 2026. Financial institutions using AI models for consumer-facing decisions in Illinois are already operating under the expanded rules.

At the federal level, the December 2025 executive order directed agencies toward a single national AI framework and pointed the FTC at AI-related unfairness and deception. Existing federal statutes — the Equal Credit Opportunity Act, the Fair Credit Reporting Act, UDAAP prohibitions, and Bank Secrecy Act anti-money-laundering rules — already apply to any financial decision, regardless of whether a human or an agent made it. How enforcement agencies interpret those statutes when the decision-maker is software operating autonomously remains the open variable.

None of this is speculative. These are enacted laws with published effective dates.

The Identity Problem Nobody Has Solved

The payment rails are being built. The regulations are being written. Between them sits a structural gap that neither side has closed: AI agents cannot establish identity through any existing financial verification framework.

Brian Armstrong, Coinbase's CEO, put it bluntly: AI agents cannot open bank accounts. Every identity verification system in financial services assumes a human on the other end. Know-Your-Customer processes require government-issued identification. Account opening flows expect a person to be present, either physically or through document-and-selfie verification. An AI agent has no face to photograph, no driver's license to upload, no Social Security number to validate.

This is more than a procedural inconvenience. Without a recognized identity, an agent cannot hold funds independently, establish a credit relationship, or satisfy the regulatory requirements that attach to financial account holders. The entire compliance architecture of banking — built over decades of regulation — presumes human actors.

PCI compliance standards present a parallel problem. The rules governing how payment card data is handled were designed around human transaction flows. An AI agent that initiates a payment, receives card data, or processes a transaction exists in undefined territory under current PCI standards. The standards don't address non-human transaction initiators because, until recently, there were none.

Fraud detection systems face their own version of this. Traditional fraud models track human behavioral patterns — typing speed, mouse movement, geolocation consistency, transaction timing that matches daily routines. An agent doesn't type. It doesn't have a mouse. It can initiate a transaction from any server in any location at any hour. Legitimate agent transactions may look identical to the patterns fraud systems flag as suspicious.

Johannes Kolbeinsson, CEO of PAYSTRAX, has proposed a "Know-Your-Agent" framework — KYA as a counterpart to KYC. The concept requires agents to carry verifiable credentials: who built the agent, who authorized it, what permissions it holds, and what spend limits apply. It's a reasonable framework that doesn't yet exist in any regulatory standard.
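A KYA credential along the lines Kolbeinsson describes might carry fields like these. This is a hypothetical sketch; no regulatory standard defines such a record today, and a real verifier would also check a cryptographic signature over the credential:

```python
from dataclasses import dataclass
from typing import FrozenSet

@dataclass(frozen=True)
class AgentCredential:
    # Hypothetical KYA record; field names are illustrative, not standardized.
    developer: str               # who built the agent
    authorizer: str              # human or business that authorized it
    permissions: FrozenSet[str]  # actions the agent may take
    spend_limit: float           # maximum authorized spend

def authorize(cred: AgentCredential, action: str, amount: float) -> bool:
    """Allow an action only if it is permitted and within the spend limit.

    Signature verification of the credential itself is omitted here.
    """
    return action in cred.permissions and amount <= cred.spend_limit
```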

Anti-money-laundering rules compound the problem further. BSA/AML compliance requires identifying the beneficial owner behind financial transactions. When an agent initiates a payment, the legal question of who bears responsibility — the agent's developer, the platform operator, the human who authorized the task, or the business that deployed the agent — has no settled answer. Securities regulators at the SEC and FCA haven't addressed agent-conducted crypto transactions either, leaving an entire category of on-chain activity without clear legal footing.

AgentPMT's wallet-signature authentication using EIP-191 offers one approach to the identity question. Each agent gets a cryptographic wallet that serves as both payment instrument and identity credential. The wallet address is verifiable, the signing key proves the agent's identity without exposing secrets, and every transaction is recorded on-chain. It doesn't satisfy KYC in the traditional sense — nothing will until regulators define what KYC means for non-human actors — but it creates the kind of auditable, verifiable identity trail that a KYA framework would require.
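The EIP-191 "personal sign" scheme (version byte 0x45) works by prepending a fixed prefix and the message length before hashing and signing, which prevents a signed message from doubling as a raw transaction. A minimal sketch of the payload construction, independent of any particular wallet library:

```python
def eip191_personal_message(message: bytes) -> bytes:
    """Build the EIP-191 'personal_sign' payload (version byte 0x45).

    The signer hashes this payload with Keccak-256 and signs the digest;
    the prefix ensures the signature cannot be replayed as a transaction.
    """
    prefix = b"\x19Ethereum Signed Message:\n" + str(len(message)).encode()
    return prefix + message
```

A verifier recovers the signing address from the signature over this payload and compares it to the agent's registered wallet address, which is how the wallet serves as an identity credential without exposing the key.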

What the Card Networks and Smart Contracts Are Each Getting Right

Visa and Mastercard are solving agent payments through familiar infrastructure. Tokenized card credentials, network-level authorization, and merchant acceptance create a system where agents can transact across millions of existing payment endpoints. The advantage is reach. Any merchant that accepts Visa today can, in principle, accept a Visa-authenticated agent transaction tomorrow.

The limitation is transparency. Card network transactions clear through intermediaries. Settlement happens days later. Dispute resolution follows legacy processes designed for human cardholders. An agent that makes a purchasing decision in milliseconds and gets disputed through a process that takes weeks creates a mismatch between the speed of automation and the pace of oversight.

Smart contract payment systems work from the opposite assumption. On-chain transactions are immediate, transparent, and programmatically enforceable. An agent paying through a smart contract can have its spend limits enforced at the protocol level, with every transaction permanently recorded on a public ledger. AgentPMT's x402Direct routes agent payments through auditable smart contracts on Base, enforcing payment terms and verifying delivery on-chain — providing the kind of transaction-level transparency that regulations increasingly demand.

The trade-off is adoption. Stablecoin payments don't yet have the merchant acceptance network that card rails provide. An agent that needs to buy office supplies from a traditional retailer still needs a card. An agent paying for cloud computing, API calls, or digital services can operate entirely on-chain.

Financial institutions evaluating these approaches don't have to choose one. The card networks will handle agent transactions where merchant acceptance matters most. On-chain payment infrastructure will handle agent-to-agent transactions, digital service payments, and scenarios where programmatic spend controls and audit trails are the priority. The firms that build for both will have the most operational flexibility as regulations clarify which transaction types require which controls.

Preparing for June and August

Financial services firms face two concrete deadlines in the next five months. The Colorado AI Act hits June 30. The EU AI Act's high-risk provisions hit August 2. Firms that operate in either jurisdiction — or serve customers in either jurisdiction — need to act before those dates, not after.

The first step is mapping every AI deployment against Colorado's "consequential decision" definition. Any model or agent that touches lending, underwriting, or credit determinations falls under the statute. That includes models where AI is one input among many — the law doesn't require AI to be the sole decision-maker, just a contributor. Firms that haven't inventoried which of their automated systems qualify as consequential are already behind on compliance preparation.
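That inventory exercise can start as a simple filter: tag each AI system with the domains it touches and flag any overlap with the statute's covered areas. The domain list below is a simplification for illustration; the statute's actual "consequential decision" definition is broader:

```python
# Simplified stand-in for the statute's covered domains.
CONSEQUENTIAL_DOMAINS = {"lending", "underwriting", "credit"}

def flag_consequential(systems: list[dict]) -> list[str]:
    """Return names of AI systems that touch a consequential-decision domain.

    The law covers AI that is any contributing input, not just the sole
    decision-maker, so a single overlapping domain is enough to flag.
    """
    return [s["name"] for s in systems
            if CONSEQUENTIAL_DOMAINS & set(s["domains"])]
```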

Underwriting models need auditing against the EU AI Act's documentation requirements. The regulation demands that firms explain how their AI systems work, demonstrate they've tested for bias, and maintain documentation sufficient for regulatory review. For firms using AI underwriting — where an AI agent pulls data from multiple sources, runs it through a model, and produces a recommendation — the documentation burden extends across every step the agent takes. Full request and response capture for every agent action, the kind of regtech AI infrastructure AgentPMT provides natively, becomes an AI compliance requirement rather than an operational preference.

Fraud detection systems need evaluation for a different reason. Can the firm's fraud models distinguish between an agent-initiated transaction and a human-initiated one? If not, legitimate agent commerce will trigger false positives at scale, creating operational drag that erodes the efficiency gains the agents were deployed to capture.

Human-in-the-loop protocols for material financial decisions deserve attention now, before regulators mandate them. Colorado's appeal rights provision implies that somewhere in the chain, a human must be reachable. The EU's explainability requirements point in the same direction. Establishing clear escalation paths — where an agent pauses, alerts a human decision-maker, and waits for approval before proceeding — is both a regulatory strategy and an operational safeguard. AgentPMT's mobile approval flow, where agents send requests to human operators and pause until receiving a response, was built for exactly this kind of checkpoint.
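The checkpoint pattern described above can be sketched as a threshold gate: routine actions execute automatically, while material ones block on a human decision. The `approve` callable here is a stand-in for a mobile approval flow and is purely illustrative:

```python
from typing import Callable

def execute_with_checkpoint(amount: float, threshold: float,
                            approve: Callable[[float], bool]) -> str:
    """Auto-execute routine spends; pause for human approval above threshold.

    `approve` stands in for an out-of-band approval channel: it blocks
    until a human operator responds, then returns their decision.
    """
    if amount < threshold:
        return "executed"
    if approve(amount):
        return "executed"
    return "declined: escalated to human review"
```

The key property is that the agent cannot proceed on a material decision without a recorded human response, which is the reachable-human guarantee the Colorado appeal-rights provision implies.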

A PYMNTS Intelligence survey found that 43% of CFOs anticipate significant agentic AI impact on budget reallocation in the near term. That figure signals where executive attention is heading — toward treating agent deployments as material operational changes that warrant dedicated budget, not as incremental technology upgrades absorbed by existing IT spending.

The payment infrastructure for agent commerce is being built by organizations with the scale to make it permanent. Visa's partner list, Mastercard's Fiserv integration, Coinbase's on-chain wallets — these are not experiments that will be quietly shut down. The regulatory framework is being enacted by legislatures with the authority to enforce it. The gap between what agents can technically do and what they are legally permitted to do will narrow over the next six months.

Financial institutions that use that window to map their AI deployments, establish audit trails, deploy AI compliance tools, and evaluate payment rail options will be positioned to operate banking automation under whatever rules emerge. The ones that wait for final guidance before starting will find themselves retrofitting compliance into systems that were built without it — which is always more expensive, harder to get right, and more likely to fail.


Sources

  • Visa and Partners Complete Secure AI Transactions — Visa
  • Banks Shift AI From Chatbots to Autonomous Money Movement — PYMNTS
  • Fiserv Integrates Mastercard Agent Pay Into Merchant Platform — PYMNTS
  • From bottlenecks to breakthroughs: How agentic AI is reshaping insurance — Microsoft
  • AI Agents Cannot Open Bank Accounts — FinTech Weekly
  • AI in Financial Services: Popular Use Cases and the Regulatory Road Ahead — Venable LLP
  • Outlook — Financial Services Regulation in 2026 — Freshfields
  • AI regulatory compliance priorities financial institutions face in 2026 — FinTech Global
  • Agentic Payments: The Next Fintech Revolution? — The Payments Association
  • 5 Ways Agentic AI Is Transforming Insurance Underwriting in 2026 — InsureTech Trends