AI agents are about to start shopping for us. With the launch of the open-source Agentic Commerce Protocol (ACP), OpenAI has become the first AI platform to enable agentic transactions. In this model, OpenAI’s ChatGPT acts as the user’s AI agent, while Instant Checkout, with payment infrastructure provided by Stripe, securely tokenizes and manages purchases using shared payment tokens. This integration marks a major step toward agentic commerce, in which AI agents can independently research, purchase, and complete transactions on behalf of users. More recently, Mastercard and PayPal have collaborated, linking Mastercard’s Agent Pay with PayPal’s wallet, to let users conduct agentic commerce transactions.
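By way of illustration only, the sketch below captures the general idea behind a scoped, shared payment token: the agent receives a short-lived credential bound to a specific merchant and spending ceiling rather than raw card details. The class, field names, and checks are assumptions made for explanation and do not reflect the actual ACP or Stripe specification.

```python
# Hypothetical illustration of a scoped payment token; field names and
# validation rules are assumptions, not the actual ACP/Stripe specification.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class SharedPaymentToken:
    token_id: str          # opaque reference; the agent never holds card data
    merchant_id: str       # token is bound to a single merchant
    max_amount_cents: int  # spending ceiling delegated by the user
    expires_at: datetime   # short-lived by design

    def authorizes(self, merchant_id: str, amount_cents: int) -> bool:
        """Return True only if the charge stays within the delegated scope."""
        return (
            merchant_id == self.merchant_id
            and amount_cents <= self.max_amount_cents
            and datetime.now(timezone.utc) < self.expires_at
        )

# Example: a token delegated for a single purchase of up to $50 at one merchant.
token = SharedPaymentToken(
    token_id="spt_demo_123",
    merchant_id="merchant_abc",
    max_amount_cents=5_000,
    expires_at=datetime.now(timezone.utc) + timedelta(minutes=15),
)
assert token.authorizes("merchant_abc", 4_200)        # within scope
assert not token.authorizes("merchant_xyz", 4_200)    # wrong merchant, declined
```

The design point the sketch is meant to convey is that the user delegates a narrow, revocable slice of spending authority rather than handing the agent a reusable payment credential.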
While agentic commerce presents opportunities for efficiency and convenience for consumers and businesses, it also introduces compliance and oversight challenges that current regulatory frameworks are not designed to address. Financial services organizations, fintechs, regulators, and legislators must take steps to understand and address emerging risks and regulatory gaps so that they remain resilient as agent-driven commerce evolves.
Agentic commerce involves autonomous systems that conduct transactions with minimal user intervention, shifting decision logic to algorithms acting on delegated authority. This approach creates uncertainty regarding where accountability begins and ends. For example, an AI agent might automatically reorder office supplies based on usage data or negotiate freight rates across multiple logistics providers without explicit approval at each step.
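As a purely hypothetical sketch of what delegated authority might look like in practice, the reordering example above could be governed by a policy along the following lines; the parameters, thresholds, and merchant identifiers are illustrative assumptions rather than any vendor’s actual configuration.

```python
# Hypothetical delegation policy for an auto-reordering agent; thresholds
# and fields are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class DelegationPolicy:
    monthly_budget_cents: int      # total the agent may spend without new consent
    per_order_limit_cents: int     # above this, a human must approve the order
    approved_merchants: set[str]   # agent may transact only with these parties

def requires_human_approval(policy: DelegationPolicy,
                            merchant: str,
                            order_cents: int,
                            spent_this_month_cents: int) -> bool:
    """Decide whether the agent can act on delegated authority alone."""
    outside_scope = merchant not in policy.approved_merchants
    over_order_limit = order_cents > policy.per_order_limit_cents
    over_budget = spent_this_month_cents + order_cents > policy.monthly_budget_cents
    return outside_scope or over_order_limit or over_budget

policy = DelegationPolicy(
    monthly_budget_cents=100_000,          # $1,000 per month
    per_order_limit_cents=20_000,          # $200 per order without approval
    approved_merchants={"office_supplier_a", "freight_broker_b"},
)
# A $150 reorder from an approved supplier proceeds autonomously...
assert not requires_human_approval(policy, "office_supplier_a", 15_000, 40_000)
# ...while a $500 order from an unfamiliar vendor is escalated to the user.
assert requires_human_approval(policy, "unknown_vendor", 50_000, 40_000)
```

Even in this simplified form, the policy makes visible where consent was given in advance and where a fresh authorization is required, which is precisely the boundary that becomes contested when agents act at scale.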
These seemingly routine transactions illustrate how intent, consent, and authorization can blur when agents act independently, which has a material impact on whether the transaction is considered consumer or commercial in nature. Common scenarios range from automated replenishment of supplies to agent-negotiated procurement across multiple providers.
From an anti-money laundering (AML) and consumer protection viewpoint, although these models might rely on digital payment flows and solutions already in use, the autonomy of agents adds layers of risk and ambiguity. In particular, the use of agents could erode the inherent protection of an arm’s-length transaction: goods and services could be exchanged in a way that gives the appearance of a legitimate transaction but is in fact nefarious in intent, facilitating sanctions evasion, terrorist financing, or direct money laundering.
The commercial incentive for adopting agentic models is significant. Retailers and payment providers view autonomous transactions as the next frontier, one that could reduce checkout friction and create 24-hour microcommerce ecosystems. For financial services organizations, however, the same scale that drives efficiency also magnifies exposure, creating an inflection point for how the novel risks of agentic commerce protocols are managed.
Autonomous agents present unique AML and Bank Secrecy Act (BSA) risks. Key concerns, among others, include:
The programmability of agents on both sides of a transaction could facilitate automated placement, layering, and integration, with AI agents obfuscating a transaction’s origins, intentionally fragmenting transactions across jurisdictions or platforms, or, worse, being trained specifically to defeat transaction monitoring in order to move funds across businesses, banks, and borders.
Traditional red flags, such as structured deposits or sudden account activity, might not apply when transactions are autonomously generated within parameters that mimic legitimate consumer behavior. It is possible, if not probable, that traditional AML programs are not designed to account for bot-to-merchant or bot-to-bot transactions, resulting in gaps in monitoring and suspicious activity reporting (one way monitoring logic might adapt is sketched after these concerns).
Blurred accountability could be pervasive in this process and lead to questions regarding whether the user, the agent, or the provider is responsible for intent and authorization. Most financial regulations predate the concept of autonomous AI agents.
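As one illustrative assumption about how monitoring logic might adapt, the sketch below aggregates agent-originated transactions by the controlling party rather than by individual account and flags fragmentation patterns that no single transaction would trigger. The transaction fields, window, and thresholds are hypothetical, not drawn from any existing monitoring system.

```python
# Hypothetical monitoring heuristic for agent-originated activity; the
# aggregation key, window, and thresholds are illustrative assumptions.
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class AgentTransaction:
    controlling_party: str   # the person or entity the agent acts for
    platform: str            # marketplace, wallet, or rail used
    amount_cents: int
    timestamp: datetime

def flag_fragmentation(txns: list[AgentTransaction],
                       window: timedelta = timedelta(hours=24),
                       min_count: int = 20,
                       min_platforms: int = 3,
                       max_single_cents: int = 50_000) -> set[str]:
    """Flag controlling parties whose agents spread many small transactions
    across several platforms in a short window, even though each transaction
    looks routine on its own."""
    flagged: set[str] = set()
    by_party: dict[str, list[AgentTransaction]] = defaultdict(list)
    for t in txns:
        by_party[t.controlling_party].append(t)
    for party, items in by_party.items():
        items.sort(key=lambda t: t.timestamp)
        for i, start in enumerate(items):
            in_window = [t for t in items[i:]
                         if t.timestamp - start.timestamp <= window]
            small = [t for t in in_window if t.amount_cents <= max_single_cents]
            platforms = {t.platform for t in small}
            if len(small) >= min_count and len(platforms) >= min_platforms:
                flagged.add(party)
                break
    return flagged
```

Any such rule would supplement, rather than replace, existing typologies and human investigation, but it illustrates the shift from account-level to principal-level analysis that agent-driven activity could require.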
Areas to consider reviewing include:
Without targeted updates, existing frameworks might struggle to address consumer protection, fraud, and accountability in agent-driven ecosystems. Moreover, if risk management is left solely to the business parties’ contractual terms, the rights, representations, and warranties sections of those terms and conditions are likely to produce inconsistent outcomes from both consumer and commercial standpoints.
As banks and fintechs develop solutions and offerings within an agentic commerce ecosystem, third-party oversight becomes more complex, and traditional third-party risk models might struggle to accommodate the adaptive nature of AI-based systems. Overseeing machine-to-machine commerce will require new technical capabilities and coordination among stakeholders. Key considerations in third-party risk management for organizations include:
Managing third-party risk in agentic commerce will depend on how well organizations keep AI systems accountable, transparent, and subject to rigorous human oversight.
The gap between innovation and supervision is not new. When digital wallets and peer-to-peer payments emerged in the 2010s, regulatory analysis and interpretive guidance to help financial services organizations manage the risks lagged materially behind the pace of technical innovation. Agentic commerce could follow a similar cycle, but with higher stakes given the need to support AI-driven innovation while protecting consumers and the financial system from misuse and malicious actors.
Without proactive action, risks for financial services organizations include:
The challenge is clear. The existing regulatory framework, which often depends on precise legal terms such as “natural persons” to determine scope and applicability, will need to be assessed and, likely, enhanced to accommodate emerging technologies and entities that fall outside these traditional categories. Bridging this divide will require reevaluating assumptions about legal identity, agency, and liability so that regulation keeps pace with innovation.
Agentic commerce represents an innovation in digital payments and a regulatory stress test. The launch of agentic commerce protocols underscores that the future of AI-driven transactions is no longer hypothetical. Yet, the autonomy of agents challenges assumptions embedded in AML programs, consumer protection laws, and supervisory frameworks.
For financial services organizations, fintechs, and regulators, the task ahead will be to close oversight gaps while preserving the benefits of agent-mediated commerce and maintaining sound risk management practices for new offerings. This effort will require collaboration across industry and government to update monitoring, accountability, and governance models in step with technology adoption that is accelerating into uncharted waters.