Agentic Commerce: Risk Management Challenges

Asaad Faquir, Benjamin Werle
12/15/2025

Autonomous AI transactions in the form of agentic commerce are here, but regulators and compliance frameworks might not be ready.

AI agents are about to start shopping for us. With the launch of the open-source Agentic Commerce Protocol (ACP), OpenAI has become the first AI platform to enable agentic transactions. In this model, OpenAI’s ChatGPT acts as the user’s AI agent, while Instant Checkout, powered by Stripe, provides the payment infrastructure that securely tokenizes and manages purchases using shared payment tokens. This integration marks a major step toward agentic commerce, in which AI agents can independently research, purchase, and complete transactions on behalf of users. More recently, Mastercard and PayPal have also collaborated, pairing Mastercard’s Agent Pay with PayPal’s wallet to let users conduct agentic commerce transactions.

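To make the token mechanics concrete, the sketch below illustrates the general shared payment token pattern: a single-use credential scoped to one merchant, capped in amount, and time-limited, so the agent never holds the underlying card number. Every class, field, and function name here is invented for illustration and does not reflect the actual ACP specification or Stripe’s Instant Checkout API.

```python
# Hypothetical sketch of the shared payment token pattern described above.
# Names are invented for illustration; this is NOT the actual ACP
# specification or Stripe's Instant Checkout API.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class SharedPaymentToken:
    """A single-use credential a payment provider could issue to an agent.

    The agent never sees the underlying card or account number; the token
    is scoped to one merchant, capped at a maximum amount, and expires.
    """
    token_id: str
    merchant_id: str
    max_amount_cents: int
    currency: str
    expires_at: datetime

def authorize_agent_purchase(token: SharedPaymentToken,
                             merchant_id: str,
                             amount_cents: int) -> bool:
    """Check an agent-initiated charge against the token's delegated scope."""
    if merchant_id != token.merchant_id:
        return False  # token is bound to a single merchant
    if amount_cents > token.max_amount_cents:
        return False  # charge exceeds the delegated spending cap
    if datetime.now(timezone.utc) >= token.expires_at:
        return False  # the delegation has expired
    return True

# A token scoped to one merchant, with a $50 cap and a 15-minute lifetime.
token = SharedPaymentToken(
    token_id="spt_demo_001",
    merchant_id="merchant_abc",
    max_amount_cents=5_000,
    currency="USD",
    expires_at=datetime.now(timezone.utc) + timedelta(minutes=15),
)
assert authorize_agent_purchase(token, "merchant_abc", 4_250)
assert not authorize_agent_purchase(token, "merchant_xyz", 4_250)
```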
While agentic commerce presents opportunities for efficiency and convenience for consumers and businesses, it also introduces compliance and oversight challenges that current regulatory frameworks are not designed to address. Financial services organizations, fintechs, regulators, and legislators must take steps to understand and address emerging risks and regulatory gaps and remain resilient as agent-driven commerce evolves.

Agentic commerce in practice

Agentic commerce involves autonomous systems that conduct transactions with minimal user intervention, shifting decision logic to algorithms acting on delegated authority. This shift creates uncertainty about where accountability begins and ends. For example, an AI agent might automatically reorder office supplies based on usage data or negotiate freight rates across multiple logistics providers without explicit approval at each step.

These seemingly routine transactions illustrate how intent, consent, and authorization can blur when agents act independently, which can materially affect whether a transaction is considered consumer or commercial in nature. Common scenarios include:

  • Purchase negotiation. AI agents talk to other AI agents to negotiate a purchase that meets predetermined guidelines, such as quantity, total price, and guaranteed delivery time frames. One example is a car purchase without the buyer or the dealer ever directly interacting: the AI agents negotiate all the details of the trade-in, purchase, and delivery and even arrange financing, involving people only to sign the required documents.
  • Purchasing orchestration. AI agents select and purchase goods or services on e-commerce platforms using prefunded accounts (a pattern sketched after this list). In this scenario, an AI agent evaluates inventory levels, identifies a purchase need, engages directly with an e-commerce site, such as Etsy or Amazon, and places the order using a virtual card that withdraws funds from a prefunded account.
  • Subscription management. AI agents initiate or cancel recurring payments as needed based on usage patterns, cancellation policies, and even retention incentives.

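A minimal sketch of the purchasing orchestration scenario, assuming user-set guardrails and a prefunded account. All class names, fields, and limits are hypothetical; the point is only to show how delegated authority might be encoded so that funds can move with no human in the loop:

```python
# Minimal sketch of purchasing orchestration guardrails, assuming a
# prefunded account and user-set limits. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class PurchaseMandate:
    """User-delegated authority: what the agent may buy, within what limits."""
    approved_merchants: set[str]
    max_order_total_cents: int
    max_quantity_per_order: int

@dataclass
class PrefundedAccount:
    balance_cents: int

    def debit(self, amount_cents: int) -> None:
        if amount_cents > self.balance_cents:
            raise ValueError("insufficient prefunded balance")
        self.balance_cents -= amount_cents

def place_order(mandate: PurchaseMandate,
                account: PrefundedAccount,
                merchant: str,
                unit_price_cents: int,
                quantity: int) -> bool:
    """Execute an agent-initiated order only if it fits the mandate."""
    total = unit_price_cents * quantity
    if merchant not in mandate.approved_merchants:
        return False  # the agent may not transact outside its delegated scope
    if quantity > mandate.max_quantity_per_order:
        return False  # order size exceeds the user's limit
    if total > mandate.max_order_total_cents:
        return False  # order total exceeds the user's limit
    account.debit(total)  # funds move with no human in the loop
    return True

# The agent reorders 20 units at $3.50 each from an approved merchant.
mandate = PurchaseMandate({"merchant_abc"}, 10_000, 50)
account = PrefundedAccount(balance_cents=25_000)
assert place_order(mandate, account, "merchant_abc", 350, 20)
assert account.balance_cents == 18_000
```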
From an anti-money laundering (AML) and consumer protection viewpoint, although these models might rely on digital payment flows and solutions already in use, the autonomy of agents adds layers of risk and ambiguity. In particular, the use of agents could erode the inherent protection of an arm’s-length transaction: goods and services could be exchanged in a way that gives the appearance of a legitimate transaction but is nefarious in intent, enabling sanctions evasion, terrorist financing, or direct money laundering.

The commercial incentive for adopting agentic models is significant. Retailers and payment providers view autonomous transactions as the next frontier, one that could reduce friction at checkout and create 24-hour microcommerce ecosystems. For financial services organizations, however, the same scale that supports efficiency magnifies exposure, creating an inflection point for how to manage the novel risks of agentic commerce.

AML blind spots in agent-mediated transactions

Autonomous agents present unique AML and Bank Secrecy Act (BSA) risks. Key concerns, among others, include:

  • Illicit prefunding. Criminals could load agent-controlled accounts with unlawful proceeds and disguise the origin of funds.
  • High-value purchases. Agents transacting in luxury goods or electronics could enable illicit value transfer in ways that are harder to detect, such as purchases below fair market value negotiated by autonomous agents.
  • Sanctions evasion. AI agents could be trained to execute transactions across multiple jurisdictions to circumvent sanctions and obfuscate participants in the purchase flow.

The programmability of agents on both sides of a transaction could facilitate automated placement, layering, and integration: AI agents could obfuscate a transaction’s origins, intentionally fragment activity across jurisdictions or platforms, or, worse, be trained specifically to defeat transaction monitoring and move funds across businesses, banks, and borders.

Traditional red flags, such as structured deposits or sudden account activity, might not apply when transactions are autonomously generated within parameters that mimic legitimate consumer behavior. It is possible, if not probable, that traditional AML programs are not designed to account for bot-to-merchant or bot-to-bot transactions, leaving a gap in monitoring and suspicious activity reporting.

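One way monitoring logic might adapt is to aggregate activity by the controlling agent rather than by account, so that small legs summed across platforms surface as a single pattern. The sketch below is a simplified heuristic under that assumption; the agent identifier, threshold, and platform count are illustrative, not an established detection standard:

```python
# Hedged sketch of monitoring logic that aggregates agent-initiated
# activity across platforms. The agent_id field, threshold, and platform
# count are assumptions for illustration, not a detection standard.
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentTransaction:
    agent_id: str      # identifier of the controlling AI agent (assumed available)
    platform: str      # merchant or payment platform used
    jurisdiction: str
    amount_cents: int

def flag_fragmented_activity(txns: list[AgentTransaction],
                             aggregate_threshold_cents: int = 1_000_000,
                             min_platforms: int = 3) -> set[str]:
    """Flag agents whose small legs, summed across platforms, exceed a
    threshold that no single transaction would trip on its own."""
    totals: dict[str, int] = defaultdict(int)
    platforms: dict[str, set[str]] = defaultdict(set)
    for t in txns:
        totals[t.agent_id] += t.amount_cents
        platforms[t.agent_id].add(t.platform)
    return {
        agent for agent, total in totals.items()
        if total >= aggregate_threshold_cents
        and len(platforms[agent]) >= min_platforms
    }

# Three $4,000 legs on three platforms: each looks routine in isolation,
# but the aggregate view ties them to a single controlling agent.
txns = [AgentTransaction("agent_7", p, "US", 400_000)
        for p in ("platform_a", "platform_b", "platform_c")]
assert flag_fragmented_activity(txns) == {"agent_7"}
```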
Regulatory frameworks under strain

Most financial regulations predate the concept of autonomous AI agents. Blurred accountability could be pervasive in this process, raising questions about whether the user, the agent, or the provider is responsible for intent and authorization.

Areas to consider reviewing include:

  • Error resolution and liability protections. Depending on the payment mechanism, consumer dispute rights depend on identifying who authorized the transaction. A consumer has clear rights under regulations (for example, Regulation E) and payment processes (for example, National Automated Clearing House Association standards), while an agent might not be granted those same rights. The potential for financial liability to shift unpredictably among the user, the merchant platform, and the financial services organization because of an AI agent’s involvement might result in heightened exposure under error-resolution and refund obligations.
  • Lending protections. If agents participate in lending or subscription models, such as buy now, pay later arrangements, questions likely will arise about how credit or authorizations could be extended and negotiated through an AI agent. For example, how would the accuracy of the consumer’s data presented by an agent affect creditworthiness assessments? Could a bias be inherent in the credit process or created over time? Can an AI agent create a permissible purpose nexus for a consumer’s credit information to be accessed? Lastly, does the existing regulatory framework allow credit to be offered and provided without a clear human-in-the-loop?
  • UDAAP and UDAP. Automated decisions by agents could expose firms to claims alleging unfair, deceptive, or abusive acts or practices (UDAAP) and unfair or deceptive acts or practices (UDAP). Even unintended algorithmic bias or opaque pricing logic could meet the legal threshold for “deceptive” and create litigation and reputational risks.

Without targeted updates, existing frameworks might struggle to address consumer protection, fraud, and accountability in agent-driven ecosystems. Moreover, if risk is left to the business parties to manage through contractual terms, the rights, representations, and warranties sections of the terms and conditions are likely to be applied inconsistently from both consumer and commercial standpoints.

Third-party risk in agentic ecosystems

As banks and fintechs develop solutions and offerings within an agentic commerce ecosystem, third-party oversight becomes more complex, and traditional third-party risk models might struggle to accommodate the adaptive nature of AI-based systems. Overseeing machine-to-machine commerce will require new technical capabilities and coordination among stakeholders. Key considerations in third-party risk management for organizations include:

  • Contractual safeguards. Agreements must evolve beyond standard third-party data collection and use clauses to include specific provisions addressing AI model behavior, training data governance, and implementation restrictions. Financial services organizations should define responsibilities for data handling, fraud liability, and consumer redress when AI agents are used.
  • Improved technical controls. Shared frameworks for transaction oversight and fraud detection are critical. Financial services organizations should require access to all digital audit trails for transactions that involve AI agents to track and analyze decision chains between agents and provide human-in-the-loop oversight (an illustrative record format follows this list).
  • Accountability clarity. Financial services organizations will need to clearly define responsibility for cases in which AI agents execute transactions that are later claimed to be unauthorized or fraudulent. Organizations should make clear their expectation that liability rests exclusively with the third party.

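As a sketch of what such an audit trail might capture, assuming the parties agree on a shared record format, the hypothetical schema below ties each agent-executed transaction to its principal, its delegated mandate, and the chain of decisions behind it. None of the field names are drawn from an existing standard:

```python
# Illustrative audit-trail schema for agent-mediated transactions, assuming
# the parties agree on a shared record format. All field names are
# hypothetical, not drawn from any existing standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionStep:
    actor: str           # "user", "agent", "counterparty_agent", or "system"
    action: str          # e.g., "negotiated_price", "selected_merchant"
    rationale: str       # the agent's recorded reason for the step
    timestamp: datetime

@dataclass
class AgentTransactionRecord:
    transaction_id: str
    agent_id: str
    principal_id: str        # the human or entity the agent acts for
    mandate_reference: str   # pointer to the delegated authority in force
    decision_chain: list[DecisionStep] = field(default_factory=list)
    human_review_required: bool = False

    def record(self, step: DecisionStep) -> None:
        """Append one link in the decision chain between agents."""
        self.decision_chain.append(step)

    def escalate(self, reason: str) -> None:
        """Route the record to human-in-the-loop review."""
        self.human_review_required = True
        self.record(DecisionStep("system", "escalated", reason,
                                 datetime.now(timezone.utc)))
```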
Managing third-party risk in agentic commerce will depend on how well organizations keep AI systems accountable, transparent, and subject to rigorous human oversight.

Balancing innovation and supervision

The gap between innovation and supervision is not new. When digital wallets and peer-to-peer payments emerged in the 2010s, regulatory analysis and interpretive guidance to help financial services organizations manage the risks moved at a materially different pace than the technical innovation. Agentic commerce could follow a similar cycle but with higher stakes, given the need to support AI-driven innovation while protecting consumers and the financial system from misuse or malicious actors.

Without proactive action, risks for financial services organizations include:

  • AML and sanctions vulnerabilities. Exploitation of prefunded accounts and anonymized transactions could lead to broader AML and sanctions risks.
  • Consumer harm. Disputes and errors for which authorization and liability are unclear could have negative impacts on consumers.
  • Regulatory arbitrage. Gaps across borders and legal jurisdictions could create opportunities for exploitation.
  • Legal and financial exposure. Banks and other businesses that are parties to the agentic commerce process could find themselves legally and financially responsible for mistakes or operational errors.

The challenge is clear. The existing regulatory framework, which often depends on precise legal terms such as “natural persons” to determine scope and applicability, will need to be assessed and, likely, enhanced to accommodate emerging technologies and entities that fall outside these traditional categories. Bridging this divide will require reevaluating assumptions about legal identity, agency, and liability so that regulation keeps pace with innovation.

Mitigating risk

Agentic commerce represents an innovation in digital payments and a regulatory stress test. The launch of agentic commerce protocols underscores that the future of AI-driven transactions is no longer hypothetical. Yet, the autonomy of agents challenges assumptions embedded in AML programs, consumer protection laws, and supervisory frameworks.

For financial services organizations, fintechs, and regulators, the task ahead will be to close oversight gaps while preserving the benefits of agent-mediated commerce and maintaining sound risk management practices for new offerings. This effort will require collaboration across industry and government to update monitoring, accountability, and governance models in step with accelerating technology adoption in uncharted waters.
