Estera Sava
15 Dec 2025 / 8 Min Read
Ashwin Das Gururaja, Senior Engineering Manager, and Devang Gaur, Senior Product Manager at Adobe, deep-dive into why data foundations are the pillars of autonomous agentic payments.
Payments teams are undergoing a significant transformation, moving from static, rule-based routing to what is now known as agentic AI orchestration. For years, ‘optimisation’ was synonymous with maintaining massive decision trees: rules that worked well when the ecosystem was smaller and outcomes were predictable.
This shift is driven by more than just technological capability; it is a response to a changing economic reality. In the past era of low-interest capital, merchants could often afford to offset decline rates simply by increasing traffic volume or tolerating higher operational overheads.
Today, the landscape is different. With inflation squeezing margins and the cost of capital rising, ‘efficiency’ has replaced ‘growth at all costs’ as the primary KPI for digital businesses. Payments teams are no longer viewed merely as utility providers but as critical revenue recovery engines.
In this climate, a static 1% lift in authorisation rates or a reduction in processing costs is no longer just an optimisation metric but a boardroom imperative. The goal has evolved: Agentic AI aims to observe issuer behaviour in real time, learning from outcomes faster than human analysts can update spreadsheets. The objectives are simple: higher approvals, lower friction, and better cost management.
Yet, there is a critical caveat to this technological leap: autonomy is only as reliable as the data behind it. If the orchestration layer lacks clarity, any AI system built on top of it will simply make the wrong decisions at high speed. Before scaling autonomy, merchants must establish the data foundations that enable intelligent learning.
Despite the hype surrounding AI, the payments infrastructure often runs on a patchwork of legacy standards. A significant issue facing merchants today is that payment service providers (PSPs) interpret the same ISO concepts in drastically different ways.
For example, a single transaction failure can be returned with three distinct descriptors depending on the provider, as illustrated in Figure 1.

Figure 1: The power of data normalisation in payments orchestration
These discrepancies are not trivial. If an autonomous agent mistakes a temporary issuer outage for a fraud signal, it will apply the incorrect fix, potentially escalating friction or abandoning the transaction too early. Furthermore, merchants frequently miss critical context – such as network performance, device trust scores, or 3DS results – seeing only the request and the result, but not the reason.
With merchants losing an estimated 2.1% of their global revenue annually to payment performance issues, the inability of AI to discern fixable failures from final ones without a reliable data layer creates a significant performance ceiling.
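To make the problem concrete, here is a minimal sketch of a normalisation map. The provider names, response codes, and taxonomy labels are all hypothetical, invented for illustration; the point is that three different descriptors for the same temporary issuer outage collapse into one internal reason, flagged as a retryable soft decline rather than a fraud signal:

```python
# Hypothetical provider codes: three PSPs report the same temporary
# issuer outage under three different descriptors.
NORMALISATION_MAP = {
    ("psp_a", "91"):               ("issuer_unavailable", "soft"),  # "Issuer unavailable"
    ("psp_b", "TEMP_UNAVAILABLE"): ("issuer_unavailable", "soft"),  # "Try again later"
    ("psp_c", "E5001"):            ("issuer_unavailable", "soft"),  # "Processing error"
}

def normalise(psp: str, code: str) -> tuple[str, str]:
    """Map a provider-specific response to (internal_reason, severity).

    'soft' declines are retryable technical issues; anything unmapped is
    treated conservatively as a 'hard' decline until it is classified.
    """
    return NORMALISATION_MAP.get((psp, code), ("unknown", "hard"))
```

An agent consuming the normalised pair, rather than the raw descriptor, can no longer mistake a provider-specific outage string for a risk signal.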
To move from simple optimisation to true autonomy, merchants must build three specific data layers that work together:
1. Normalisation – removes ambiguity and reduces false learning. It involves creating a consistent internal definition for decline reasons and response codes, ensuring that soft declines and hard declines are treated differently, and separating technical issues from risk signals.
2. Enrichment – an AI agent cannot guess the rules of the road; to exhibit intelligent behaviour, the system must be taught the ‘common sense’ of payments. Enrichment provides the necessary context, so decisions are not made in a vacuum. This involves equipping the agent with payments domain knowledge, as illustrated in Figure 2.

Figure 2: The data architecture of agentic payments: normalisation, enrichment, and governance
3. Governance – good governance makes autonomy accountable rather than mysterious. This layer requires tracking how every decision is made and logging the version history for routing strategies. If an AI-driven decision is questioned, the merchant must have the capacity to explain the inputs and logic.
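The governance layer can be as simple as an append-only decision log keyed by strategy version. The sketch below is an illustration, with all field names assumed rather than taken from any particular product: every automated routing decision records its inputs, action, rationale, and the strategy version that produced it, so a questioned decision can be replayed and explained.

```python
import json
import time
from dataclasses import dataclass, asdict, field

@dataclass
class RoutingDecision:
    transaction_id: str
    strategy_version: str          # which routing strategy produced this decision
    inputs: dict                   # normalised signals the agent saw
    action: str                    # e.g. "route:psp_b" or "retry:delayed"
    rationale: str                 # human-readable explanation of the logic
    timestamp: float = field(default_factory=time.time)

class DecisionLog:
    """Append-only log so every automated decision can be audited later."""

    def __init__(self) -> None:
        self._entries: list[RoutingDecision] = []

    def record(self, decision: RoutingDecision) -> None:
        self._entries.append(decision)

    def explain(self, transaction_id: str) -> str:
        """Return the full decision record for a transaction, or a clear miss."""
        for d in self._entries:
            if d.transaction_id == transaction_id:
                return json.dumps(asdict(d))
        return "no decision recorded"
```

In production this would write to durable, tamper-evident storage, but the contract is the same: no decision without a recorded explanation.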
With a strong data foundation in place, autonomy should be introduced gradually.
The safest rollout path is shadow mode. In this phase, the agent observes traffic and recommends actions, which are then measured against current routing strategies. Crucially, there is no revenue exposure or risk to customers during this phase. Execution is only enabled once the agent proves that its decisions consistently outperform the baseline.
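Shadow mode amounts to running the agent's recommendation alongside the live strategy while only the live strategy executes. A minimal sketch, with hypothetical function names, showing how agreement can be measured before execution is ever enabled:

```python
def shadow_compare(transactions, live_strategy, agent_strategy):
    """Run both strategies over the same traffic; only live_strategy executes.

    Returns the agreement rate and the cases where the agent disagreed,
    so its recommendations can be audited against the baseline.
    """
    disagreements = []
    for txn in transactions:
        live_action = live_strategy(txn)    # this is what actually runs
        agent_action = agent_strategy(txn)  # recorded only, never executed
        if agent_action != live_action:
            disagreements.append((txn, live_action, agent_action))
    agreement = 1 - len(disagreements) / len(transactions)
    return agreement, disagreements
```

The disagreement cases are the interesting ones: each can be scored against the eventual outcome to test whether the agent's choice would have outperformed the baseline.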
Once active, guardrails keep autonomy under control. These controls, such as retry limits, caps on re-routing, and instant rollback capabilities, protect against performance drops and excessive retry penalties.
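The guardrails described above can be expressed as hard limits checked before any agent action is executed. This is a sketch, with thresholds invented purely for illustration:

```python
MAX_RETRIES_PER_TXN = 2        # cap retries to avoid excessive-retry penalties
MAX_REROUTE_SHARE = 0.10       # agent may re-route at most 10% of traffic
APPROVAL_DROP_ROLLBACK = 0.02  # roll back if approvals fall >2 pts vs baseline

def allow_action(action, txn_retries, reroute_share, approval_delta):
    """Return (allowed, reason). Any breached limit blocks the action;
    a sustained approval-rate drop triggers rollback to the baseline."""
    if approval_delta < -APPROVAL_DROP_ROLLBACK:
        return False, "rollback: approval rate below baseline tolerance"
    if action == "retry" and txn_retries >= MAX_RETRIES_PER_TXN:
        return False, "retry limit reached"
    if action == "reroute" and reroute_share >= MAX_REROUTE_SHARE:
        return False, "re-route cap reached"
    return True, "ok"
```

The key design choice is that the limits sit outside the agent: even a misbehaving model cannot exceed them, and rollback does not depend on the model recognising its own failure.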

Figure 3: The safe rollout strategy: from shadow mode to active guardrails
Merchants evaluating AI-driven orchestration often focus heavily on uplift; however, the focus should arguably be on transparency and control. Payments are highly regulated, and merchants remain accountable for operational outcomes regardless of third-party automation.
The risk of a ‘black box’ system goes beyond poor performance, creating a direct compliance liability. Consider a scenario where an autonomous agent systematically starts to decline transactions from a specific postal code or card range because it statistically correlates them with higher chargeback rates. While mathematically ‘optimal’ for fraud reduction, this behaviour could inadvertently violate fair lending laws or financial inclusion mandates by effectively redlining a demographic.
Should a regulator or auditor ask why a specific segment of customers was blocked, the answer cannot be ‘the model decided it’. Without granular explainability (i.e., the ability to trace exactly why the AI formed a specific correlation and what data points it prioritised), a merchant faces potential regulatory fines and reputational damage. In the eyes of compliance bodies, accountability cannot be outsourced to an algorithm.
To avoid these risks, merchants should press vendors on how decisions are made, logged, versioned, and rolled back. If a merchant cannot explain why decline patterns shifted, that in itself is a compliance risk. AI should improve traceability, not remove it.
Agentic payments promise substantial gains, including more approvals, a better customer experience, and lower operational load. However, without good data foundations, autonomy is merely a guessing engine.
Merchants should invest first in normalisation, enrichment, governance, and controlled rollout. The equation is straightforward: better data leads to smarter decisions and safer autonomy. Everything else is a distraction.

Ashwin Das Gururaja is a Senior Engineering Manager at Adobe, where he leads the Commerce Payments & Risk platform teams. His work spans payment services, fraud mitigation, order processing, subscription lifecycle management, and checkout and commerce experiences. He currently focuses on payment optimisation, fraud defence, and applying AI to large-scale commerce systems.

Devang Gaur is a Senior Product Manager at Adobe, leading global payments, risk, and fraud initiatives. He drives higher authorisation rates, lower churn, and compliance with evolving mandates. Previously at PayPal’s Braintree team, he led payment optimisation products such as Smart Retries. His work spans ML-driven payments frameworks, large-scale routing migrations, and multi-million-dollar revenue impact.
Adobe is a global leader in creativity and digital experience solutions. Through its Creative Cloud, Document Cloud, and Experience Cloud offerings, Adobe empowers individuals and enterprises to design, create, and deliver exceptional digital experiences.
The Paypers is the Netherlands-based leading independent source of news and intelligence for professionals in the global payments community.
The Paypers provides a wide range of news and analysis products aimed at keeping the ecommerce, fintech, and payment professionals informed about the latest developments in the industry.