When AI Agents Start Making Purchases, How Should the Payments Industry Respond?

This piece is by Donald Kossmann, Chief Technology Officer at Chargebacks911 (https://www.linkedin.com/in/donald-kossmann-8915b1/)

Artificial intelligence is rapidly moving beyond assisting decisions and into executing them. Across commerce, finance, and procurement systems, a new generation of autonomous software agents is beginning to take on tasks that once required direct human action. Banks and payment networks have already begun testing these capabilities in live environments. For example, Santander and Mastercard recently completed what they describe as the first live end-to-end payment executed by an AI agent within a banking infrastructure.

For businesses and consumers, the potential benefits are clear. AI agents promise faster decision-making, reduced friction in purchasing, and the ability to automate routine commercial activity. In many cases they can process information and compare options far more efficiently than any human buyer. But when software begins spending money on behalf of people and organisations, the assumptions that underpin digital commerce begin to change. One of the most significant shifts is the weakening connection between technical authorisation and human intent.

When authorisation no longer reflects intent

For decades, digital payments have relied on a straightforward signal of intent. A user logs into an account, reviews a purchase, and confirms the transaction. That moment of confirmation provides a clear record of who initiated the purchase and why. Agentic commerce introduces a different model.

When an AI agent operates persistently on behalf of a user, purchases may occur without a person directly initiating each step. The agent may be operating within its permissions, yet the outcome may still differ from what the user expected or wanted. From a payments and dispute perspective, this creates a new grey area. Transactions may be technically authorised while still being challenged by the customer afterwards.

Historically, the industry has focused heavily on protecting access credentials and preventing account compromise. The emerging challenge is different: ensuring that the decisions made by autonomous systems remain aligned with user intent.

Why current authorisation frameworks may struggle

Most modern digital ecosystems rely on delegated access models such as OAuth to allow applications to act on a user’s behalf. These frameworks were designed for situations where software performs limited, clearly defined actions after receiving permission. Persistent AI agents introduce a more complex operating model.

Permissions may remain active long after a user’s expectations have changed. Agents may interact with multiple merchants, suppliers, and services over extended periods of time. The decisions they make may also involve interpretation and optimisation rather than simple rule execution. The result is not necessarily that existing frameworks become obsolete. However, they will likely need to evolve. More granular permissions, clearer expiration conditions, and stronger revocation mechanisms will become increasingly important. Organisations will also need better contextual controls to determine when and how an agent should be allowed to act.
Equally important is transparency. Businesses must be able to demonstrate not only what permissions an agent was granted, but how those permissions were exercised over time.
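As a rough sketch of what more granular, time-bound delegation could look like, the example below models an agent grant with an explicit action scope, a hard expiry, a revocation flag, and a usage log. The names (AgentGrant, is_action_permitted, and so on) are hypothetical; a real implementation would build on OAuth extensions or a payment network's own mandate format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Hypothetical sketch: a scoped, time-bound grant for a purchasing agent.
@dataclass
class AgentGrant:
    agent_id: str
    allowed_actions: frozenset[str]   # e.g. {"compare_prices", "purchase"}
    expires_at: datetime              # hard expiry, not just a token refresh
    revoked: bool = False             # the user can withdraw consent at any time
    usage_log: list = field(default_factory=list)  # how the grant was exercised

    def is_action_permitted(self, action: str) -> bool:
        now = datetime.now(timezone.utc)
        if self.revoked or now >= self.expires_at:
            return False
        if action not in self.allowed_actions:
            return False
        # Record every exercise of the permission for later audit.
        self.usage_log.append((now, action))
        return True

grant = AgentGrant(
    agent_id="travel-agent-01",
    allowed_actions=frozenset({"compare_prices", "purchase"}),
    expires_at=datetime.now(timezone.utc) + timedelta(days=7),
)

print(grant.is_action_permitted("purchase"))   # True while the grant is active
grant.revoked = True
print(grant.is_action_permitted("purchase"))   # False once consent is withdrawn
```

The usage log in this sketch is what makes the transparency requirement concrete: it records not just what the agent was allowed to do, but when and how each permission was actually used.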

Governance must evolve alongside automation

As AI agents begin connecting to procurement platforms, supplier marketplaces, and corporate payment systems, governance becomes essential. Before granting an AI system purchasing authority, organisations should establish clear safeguards. Permissions should be tightly scoped and time-bound. Agents should never have unlimited purchasing authority. Spending limits, supplier restrictions, and category constraints should all be clearly defined.
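A minimal sketch of such a policy check might look like the following. The field names and constraint values are illustrative assumptions, not recommendations.

```python
from dataclasses import dataclass

# Hypothetical purchasing policy: tightly scoped, with explicit limits.
@dataclass(frozen=True)
class PurchasingPolicy:
    max_spend_per_txn: float
    max_spend_per_day: float
    approved_suppliers: frozenset[str]
    allowed_categories: frozenset[str]

def check_purchase(policy, supplier, category, amount, spent_today) -> list[str]:
    """Return a list of policy violations; an empty list means the purchase may proceed."""
    violations = []
    if supplier not in policy.approved_suppliers:
        violations.append(f"supplier '{supplier}' is not on the approved list")
    if category not in policy.allowed_categories:
        violations.append(f"category '{category}' is outside the agent's scope")
    if amount > policy.max_spend_per_txn:
        violations.append("amount exceeds the per-transaction limit")
    if spent_today + amount > policy.max_spend_per_day:
        violations.append("purchase would exceed the daily spending limit")
    return violations

policy = PurchasingPolicy(
    max_spend_per_txn=500.0,
    max_spend_per_day=2_000.0,
    approved_suppliers=frozenset({"acme-supplies", "globex"}),
    allowed_categories=frozenset({"office", "it-hardware"}),
)

print(check_purchase(policy, "acme-supplies", "office", 120.0, spent_today=300.0))    # []
print(check_purchase(policy, "unknown-vendor", "travel", 900.0, spent_today=1_500.0)) # three violations
```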

Transparency is also critical. Systems must provide detailed logs explaining why an agent selected a particular vendor, switched suppliers, or executed a transaction. Human oversight remains important as well. Organisations should retain the ability to intervene quickly if an agent behaves unexpectedly or begins operating outside acceptable boundaries. Finally, robust evidence capture will become essential. As disputes arise, businesses will need to demonstrate exactly what the agent was allowed to do and what actions it ultimately took.
Without these safeguards, companies risk extending financial authority without extending governance.
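One way to make that evidence durable, sketched here under assumed field names, is an append-only decision log in which each entry is chained to the previous one by hash, so the record of what an agent did cannot be silently altered after a dispute arises.

```python
import hashlib, json
from datetime import datetime, timezone

# Illustrative append-only decision log: each entry embeds the hash of the
# previous entry, so any later tampering breaks the chain.
class DecisionLog:
    def __init__(self):
        self.entries = []
        self._last_hash = "genesis"

    def record(self, action: str, rationale: str, details: dict) -> None:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "rationale": rationale,     # why the agent chose this vendor or price
            "details": details,
            "prev_hash": self._last_hash,
        }
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = self._last_hash
        self.entries.append(entry)

log = DecisionLog()
log.record(
    action="switch_supplier",
    rationale="incumbent raised unit price 14% above the next-best quote",
    details={"from": "globex", "to": "acme-supplies", "sku": "toner-04"},
)
print(log.entries[-1]["hash"][:16], "...")
```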

AI agents as intelligence targets

Agentic commerce also introduces new security considerations. Sophisticated shopping and procurement agents accumulate large volumes of behavioural data. Over time they learn purchasing preferences, pricing sensitivities, supplier relationships, and operational timing patterns. Within enterprise environments, this information can reveal meaningful insights about how organisations operate.
If compromised, these systems could provide adversaries with access to valuable intelligence. Attackers might gain insight into procurement strategies, identify supply chain vulnerabilities, or use behavioural data to launch targeted manipulation campaigns. In that sense, highly autonomous agents may become attractive targets not only for fraud but also for reconnaissance.

The emerging attack surface of automated negotiation

Another risk lies in how AI agents make purchasing decisions. If agents are responsible for negotiating pricing or selecting suppliers dynamically, adversaries may attempt to manipulate the signals those systems rely on. Attackers could influence pricing data, poison training inputs, or introduce malicious suppliers that appear legitimate to automated systems.
In many cases the resulting transaction could still appear authorised and compliant with company policy. The manipulation occurs earlier in the decision process, shaping the options the agent believes are optimal. This is why explainability and auditability in automated decision-making will become increasingly important.
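One defensive pattern, sketched below with assumed thresholds, is to cross-check the price signals an agent acts on against independently sourced reference quotes before a quote is treated as optimal, and to record the comparison so the decision can be audited later.

```python
from statistics import median

# Illustrative signal sanity check: treat a quote as suspect if it deviates
# too far from the median of independently sourced reference quotes.
DEVIATION_THRESHOLD = 0.30  # assumed value, for illustration only

def vet_quote(candidate_price: float, reference_prices: list[float]) -> dict:
    ref = median(reference_prices)
    deviation = abs(candidate_price - ref) / ref
    return {
        "candidate_price": candidate_price,
        "reference_median": ref,
        "deviation": round(deviation, 3),
        "suspect": deviation > DEVIATION_THRESHOLD,  # flag for human review
    }

# A suspiciously cheap quote may indicate a seeded supplier designed to
# look optimal to an automated buyer.
print(vet_quote(41.0, [98.0, 102.0, 95.0, 101.0]))  # suspect: True
print(vet_quote(99.0, [98.0, 102.0, 95.0, 101.0]))  # suspect: False
```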

Rethinking trust in automated commercial ecosystems

The rise of agentic commerce also changes how organisations must evaluate vendors and partners. Traditional third-party risk management focuses on assessing the security posture of the organisations involved in a transaction. As autonomous agents begin acting on behalf of those organisations, that model becomes incomplete.

Companies must begin assessing the behaviour and governance of the systems themselves. This includes understanding how permissions are structured, how decisions are recorded, and how anomalous activity is detected and controlled. In other words, trust must extend beyond the organisation to the algorithm.
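Behavioural assessment can start simply. The sketch below, with assumed thresholds, flags agent activity that drifts from the agent's own historical baseline, the kind of anomaly check that must now apply to the algorithm as well as the organisation.

```python
from statistics import mean, stdev

# Illustrative anomaly check: flag a purchase amount that sits more than
# z_limit standard deviations from the agent's own historical spending.
def is_anomalous(amount: float, history: list[float], z_limit: float = 3.0) -> bool:
    if len(history) < 2:
        return False  # not enough baseline data to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > z_limit

history = [120.0, 135.0, 110.0, 128.0, 142.0, 119.0]
print(is_anomalous(131.0, history))    # False: in line with past behaviour
print(is_anomalous(4_800.0, history))  # True: warrants review before settlement
```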

Preparing for autonomous commerce

Agentic AI has the potential to transform how transactions occur across both consumer and enterprise environments. Automation will create efficiencies and open new opportunities across commerce. However, as software systems gain the authority to make purchasing decisions, the payments ecosystem must ensure that the trust layer evolves at the same pace.

The challenge is not simply securing access to systems. It is ensuring that autonomous decisions remain aligned with the people and organisations those systems represent. When AI agents begin making purchases, the industry will need new ways to verify intent, maintain transparency, and preserve trust across the transaction lifecycle. Payment networks are already beginning to explore this challenge, including initiatives aimed at establishing verifiable intent frameworks for AI-initiated transactions. Because once machines start spending money, the consequences will extend far beyond the transaction itself.
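What a verifiable-intent check could look like in miniature: a user-held key signs the parameters of an intended purchase up front, and the payment side verifies that the transaction the agent submits matches what was actually authorised. The scheme and field names below are assumptions for illustration, not a description of any network's actual framework.

```python
import hmac, hashlib, json

# Illustrative intent mandate: the user signs the purchase parameters in
# advance; the payment side verifies the agent's transaction against them.
USER_SECRET = b"demo-key-held-by-the-user"  # placeholder; a real scheme would
                                            # use asymmetric keys, not a shared secret

def sign_intent(intent: dict) -> str:
    payload = json.dumps(intent, sort_keys=True).encode()
    return hmac.new(USER_SECRET, payload, hashlib.sha256).hexdigest()

def verify_transaction(txn: dict, signature: str) -> bool:
    return hmac.compare_digest(sign_intent(txn), signature)

intent = {"merchant": "acme-supplies", "max_amount": 250.0, "sku": "toner-04"}
sig = sign_intent(intent)

print(verify_transaction(intent, sig))                             # True: matches intent
print(verify_transaction({**intent, "max_amount": 9_999.0}, sig))  # False: drifted from intent
```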
