DataVisor has identified a structural gap between AI-driven fraud threats and financial institutions' defensive capabilities in a newly launched report.
Following this announcement, DataVisor has published its 2026 Fraud & AML Executive Report, identifying a structural disconnect between the growing sophistication of AI-driven fraud and the capacity of financial institutions to counter it.
The report, based on surveys of senior fraud and anti-money laundering (AML) leaders across banks, credit unions, fintechs, and digital payments platforms, found that 74% of respondents cite AI-driven fraud as a primary threat. Yet 67% report that their organisations lack the infrastructure required to deploy effective AI-based defences. DataVisor describes this disconnect as the 'AI Readiness Gap'.
Legacy systems and fragmented data among key obstacles
According to the official press release, the findings point to legacy infrastructure, organisational silos, and outdated operating models as the principal barriers slowing institutional response to evolving fraud threats. As generative AI enables more sophisticated attack methods, including deepfakes, synthetic identities, coordinated fraud rings, and automated scam campaigns, many institutions remain constrained by fragmented data environments and detection models that were not designed for the current threat landscape.
At the same time, the report indicates that financial institutions are actively working to address these shortcomings. Some 81% of surveyed organisations are considering or implementing a unified approach to fraud and AML operations, and 74% say that achieving a single, comprehensive view of risk would materially improve detection effectiveness. These figures reflect a growing recognition that integrated intelligence and unified workflows are increasingly necessary to counter coordinated and adaptive attacks.
The report also highlights a shift in how executives prioritise AI applications within fraud and AML functions. Historically, AI has been associated primarily with detection and scoring. However, the survey suggests that improving investigation workflows is now viewed as equally, if not more, impactful: 50% of executives ranked investigator assistance as the top AI use case, ahead of detection and scoring at 44%.
Operational efficiency and real-time response
Beyond detection, the report addresses how AI can support operational decision-making throughout the fraud management lifecycle. Large language model (LLM)-based AI agents are reported to reduce false positives by a further 42% through rule optimisation, while AI-assisted alert reviews and suspicious activity report (SAR) narrative generation are said to reduce review time by up to 60% and increase SAR filing efficiency by up to 90%.
Moreover, the report examines how structural changes to payments infrastructure are increasing pressure on fraud and AML teams. The expansion of real-time payments, faster digital onboarding, and more diversified customer interaction channels are shortening the available window for fraud detection and intervention. This, the report argues, places greater urgency on institutions to modernise their operating models and data foundations.
The 2026 Fraud & AML Executive Report also provides guidance on how organisations can unify fraud and AML operations, strengthen data infrastructure, and operationalise AI capabilities across detection, investigation, and reporting workflows.