The Hidden Cost of Hype: Why Most AI Projects in Payments Don't Move the Revenue Needle

DEUNA
May 15, 2026

Every enterprise payments leader heard the same pitch in 2025. Add AI to your stack. Watch fraud drop, approvals rise, and revenue follow.

Two years into the spending wave, the data tells a different story.

In July 2025, MIT's Project NANDA published The GenAI Divide: State of AI in Business 2025. The researchers analyzed 300 enterprise AI deployments, interviewed 150 executives, and surveyed 350 employees. The headline finding has become impossible to ignore.

Despite an estimated $30 to $40 billion poured into generative AI initiatives, roughly 95% of enterprise AI pilots produced no measurable P&L impact. Only 5% drove rapid revenue acceleration.

Payments is one of the functions where this gap is most consequential. Every transaction has a binary outcome. Every dollar saved or lost flows directly to the bottom line. The feedback loop between a payment decision and its revenue impact is immediate and measurable in a way that few other business functions can match.

So what is going wrong? And what does the 5% that works actually look like?

The Pattern Behind the Failures

The MIT report is direct about the cause. According to lead author Aditya Challapally, the failure is "not the quality of the AI models, but the learning gap for both tools and organizations."

Generic AI tools are built to be flexible. Enterprise payment workflows are not. They are tightly coupled to PSPs, acquirers, fraud providers, ERPs, and reconciliation systems that each speak slightly different languages. When a payments team plugs a generic AI tool into that environment, three things tend to break.

The AI Sees Only a Slice of the Data

A fraud model trained inside a single PSP cannot observe transactions routed through a second or third PSP. A retry engine bolted onto one acquirer cannot reason about issuer behavior across the rest of the stack. This is not a fundamental limitation of AI. It is a data architecture problem. When transaction data lives in silos, the AI can only be as good as the fragment it can see.

The AI Cannot Act

Most of what gets sold as "AI in payments" is detection or process automation. A dashboard that flags risky transactions. An alert that recommends a retry. A report that surfaces decline patterns. These tools are useful, but detection without execution authority still requires a human to translate every insight into action. In payments, where conditions shift transaction by transaction, that latency has a cost.

The AI Does Not Learn from Outcomes

Closed-loop feedback requires the system to observe the result of its own decisions and adjust over time. That requires a unified, normalized data layer that most enterprises have not yet built. Without it, the model cannot improve because it cannot see the full picture of what happened after it acted.

These three problems are solvable. But solving them requires building the right foundation first: unified data across every provider, consistent definitions, and an architecture that lets the AI observe, act, and learn across the entire stack, not just one part of it.
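What a "unified, normalized data layer" means in practice can be sketched in a few lines: each provider reports outcomes in its own dialect, and the foundation's first job is mapping them onto one shared vocabulary so a single model can observe the whole stack. The provider names, decline codes, and field names below are illustrative assumptions, not DEUNA's actual data model.

```python
# Hypothetical sketch: normalizing decline events from multiple PSPs into one
# schema so a model can observe the whole stack. Provider names, codes, and
# field names are illustrative, not any vendor's actual data model.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class PaymentEvent:
    provider: str
    approved: bool
    decline_reason: Optional[str]  # normalized reason, or None if approved

# Each provider reports outcomes in its own dialect.
PROVIDER_DECLINE_MAP = {
    "psp_a": {"05": "do_not_honor", "51": "insufficient_funds"},
    "psp_b": {"card_declined": "do_not_honor", "nsf": "insufficient_funds"},
}

def normalize(provider: str, raw: dict) -> PaymentEvent:
    """Map a provider-specific payload onto the unified schema."""
    approved = raw.get("status") == "approved"
    reason = None
    if not approved:
        reason = PROVIDER_DECLINE_MAP[provider].get(raw.get("code"), "unknown")
    return PaymentEvent(provider=provider, approved=approved, decline_reason=reason)

events = [
    normalize("psp_a", {"status": "declined", "code": "51"}),
    normalize("psp_b", {"status": "declined", "code": "nsf"}),
]
# Both declines now share one vocabulary, so one model can learn from both.
assert {e.decline_reason for e in events} == {"insufficient_funds"}
```

Without a mapping like this, "insufficient funds" from one PSP and "nsf" from another look like unrelated signals, and the model's view fragments along provider boundaries.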

Where the Cost Actually Shows Up

The damage from this gap is rarely a line item on a vendor invoice. It hides in metrics that looked broken before AI was deployed and continued to look broken after.

False declines are the clearest example. According to ClearSale's industry research, US merchants lost approximately $157 billion to falsely declined transactions in 2023. Of that total, an estimated $81 billion was never recovered, meaning the customer did not retry, did not switch to another payment method, and did not return to complete the purchase.

The remaining portion represents transactions where the customer tried again successfully, either through a retry, an alternative payment method, or a follow-up attempt. ClearSale also reports that false declines now cost merchants roughly 75 times more than confirmed fraud.

A separate analysis conducted by Checkout.com in partnership with Oxford Economics, based on surveys of over 1,500 medium to large enterprise merchants and 8,000 consumers across four major markets, put direct lost revenue from false declines at $50.7 billion in a single year. The methodologies differ. The order of magnitude does not.

The 2026 Global E-commerce Payments and Fraud Report from the Merchant Risk Council, surveying 1,278 merchants across 37 countries, captured the operational side of the same problem. Roughly 40% of merchants now use machine learning tools and intelligent payment routing in some form. Payment success rate and revenue are the two metrics merchants now rate as "extremely important."

Yet the same merchants report widening fragmentation across providers, fraud rules, and authorization logic. Those are exactly the conditions under which AI tools underperform.

A common pattern looks like this. A merchant deploys an AI fraud model that promises a 30% reduction in false positives. The model performs as advertised on the transactions it sees. Six months later, authorization rates have not moved. Revenue is flat. The reason is simple.

The rest of the stack is still running on static configuration. The retry logic. The routing rules. The reconciliation. The second and third PSPs. The AI improved one slice. The system as a whole did not change.

This is the hidden cost of hype. Not that AI failed, but that AI was deployed in a place where it could not act on what it saw.

What the 5% Does Differently

The MIT report makes a second finding that gets quoted less often. Among organizations that buy AI from specialized vendors and integrate it through partnerships, roughly 67% report success. Internal builds succeed at one third that rate. These numbers are not inconsistent with the overall 5% success rate: they reflect adoption patterns. The vast majority of organizations are still attempting internal builds, and most of those fail. The 67% figure describes what happens when organizations break from that pattern and choose a different approach.

In payments, that environment has a specific shape.

The AI has to sit above the entire stack, not inside one part of it. It needs unified access to authorization data, decline reasons, fraud signals, retry outcomes, and reconciliation records across every provider the merchant uses. It needs the authority to act on that data: rerouting a transaction, triggering a retry, adjusting a fraud rule, or escalating a recovery flow without a human in the loop. And it needs a feedback channel that lets it observe the result of its own decisions and improve over time. Only with all three can it close the loop between insight and outcome.

From Observing to Acting

The 2026 MRC report found that 19% of merchants already have solutions in place to accept payments initiated by AI agents acting on behalf of consumers: transactions where an AI system completes the purchase autonomously, without a human present at checkout. Enterprise adoption sits at 28%. SMB and mid-market is at 12%. Another 63% of merchants are exploring or implementing solutions.

The merchants on the leading edge are not the ones who deployed the most AI. They are the ones who deployed AI in places where it could close the loop between decision and outcome. The difference is not the model. It is the infrastructure underneath it.

Athia, DEUNA's agentic payments intelligence engine, does not sit alongside the payments stack as a dashboard. She sits inside the orchestration layer that connects more than 400 PSPs, acquirers, alternative payment methods, and fraud providers through a single integration.

When Athia identifies a false decline pattern, she reroutes the next attempt to a higher-performing acquirer in real time. When she observes that a particular issuer responds better to retries at a specific time of day, she adjusts the schedule automatically. When she learns that a fraud signal is producing more false positives than fraud catches, she tunes the rule.
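The decision loop behind behaviors like these can be sketched with a deliberately simple policy: track each acquirer's observed approval rate, route the next attempt to the best performer, and feed every outcome back into the stats. The acquirer names and the plain success-rate policy below are illustrative assumptions, not a description of Athia's actual algorithms.

```python
# Hypothetical sketch of a closed decision loop: route each attempt to the
# acquirer with the best observed approval rate, then feed the outcome back.
# Names and the simple success-rate policy are illustrative only.
class ClosedLoopRouter:
    def __init__(self, acquirers):
        self.stats = {a: {"tries": 0, "approved": 0} for a in acquirers}

    def choose(self) -> str:
        """Act: pick the acquirer with the highest observed approval rate.
        Unseen acquirers get the benefit of the doubt (rate 1.0)."""
        def rate(a):
            s = self.stats[a]
            return s["approved"] / s["tries"] if s["tries"] else 1.0
        return max(self.stats, key=rate)

    def record(self, acquirer: str, approved: bool) -> None:
        """Learn: observe the result of the decision and update the stats."""
        self.stats[acquirer]["tries"] += 1
        self.stats[acquirer]["approved"] += int(approved)

router = ClosedLoopRouter(["acquirer_x", "acquirer_y"])
router.record("acquirer_x", approved=False)  # a decline pattern emerges
router.record("acquirer_x", approved=False)
router.record("acquirer_y", approved=True)
assert router.choose() == "acquirer_y"  # the next attempt is rerouted
```

A dashboard would stop after the `record` step and show a chart; the loop only closes because `choose` acts on what `record` learned, with no human translating one into the other.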

The result is the closed loop the MIT report describes as the difference between the 5% and the 95%. Detection becomes execution. Execution becomes learning. Learning compounds.

The Lesson Worth Taking

The hype cycle around AI in payments will continue. New vendors will pitch new capabilities. None of it will matter if the underlying architecture cannot let the AI act.

The merchants who move the revenue needle in 2026 will not be the ones with the most AI tools. They will be the ones who put AI in a position to make decisions and live with the consequences. That requires unified data, execution authority, and a feedback loop. Those three requirements are architectural, not algorithmic.

The 95% spent on pilots that did not move the needle. The 5% spent on infrastructure that let the AI do its job.

See What Closed-Loop AI in Payments Looks Like

If your team has deployed AI tools that are not yet moving the metrics that matter, the issue is probably not the model. It is where the model is sitting.

Learn more about how DEUNA approaches agentic payments intelligence, or request a demo to see what AI looks like when it can finally act on what it sees.
