
Vlad Macovei
16 Mar 2026 / 8 Min Read
Aivar Paul, Wallester’s Chief AML Officer and Board Member, shares insights on how AI can be both a problem and a solution in the fight against fraud in 2026.
Financial fraud is a problem that is anything but static. It's getting cheaper and faster to carry out, and harder to detect. What's interesting is that the same technology driving the shift is also the best tool for fighting it.
That's the position the financial services industry finds itself in as it moves through 2026. AI is simultaneously the sharpest weapon in a fraudster's arsenal, and the most powerful defence a bank or fintech can deploy. Still, the defence side of the equation doesn't work properly without one thing in place: clean, reliable identity data collected at the very start of a customer relationship. In other words, KYC (Know Your Customer) has become the critical link in the chain.
‘There’s a lot of noise around AI,’ says Aivar Paul, Wallester’s Chief AML Officer and Board Member. ‘Fear, assumptions, and big claims about what it will replace. But in financial crime, the key question is what it can strengthen. Criminal networks are already using AI to automate attacks, scale social engineering, and adapt faster than traditional controls. If fraudsters are using AI, financial institutions must use it too. AI is no longer a competitive advantage; it’s a necessity.’
According to AFP's 2025 Payments Fraud and Control Survey, four out of five organisations are hit by payment fraud attacks. In the UK, for example, fraud now accounts for roughly 40% of all recorded crime. These are not one-off incidents or isolated cases – far from it. They are a signal that financial crime has become industrialised.
A big part of the reason is AI. Fraudsters are using it to operate at scale. They automate scam conversations, switch languages on the fly, and change the approach in real time when something stops working. The barrier to entry has dropped significantly. Organised crime groups, once focused on drug trafficking and other high-risk activities, are moving into scams because the sentences are lighter and prosecution rates are lower.
In practice, what has emerged is a new division of labour. Typically, AI agents will now handle the early stages of a scam conversation – like building trust or assessing the target. Then, once the opportunity looks worthwhile, a human fraudster will step in. Not only is this approach efficient and scalable, but it is also exceptionally difficult to detect.
A recent industry poll confirmed that AI is now seen as the single most impactful external factor shaping the financial crime landscape in 2026, cited by 37% of respondents – ahead of increasingly sophisticated criminality (29%) and regulatory change (16%).
Unsurprisingly, the same technology is being deployed on the other side. Banks and fintechs are using AI to monitor transactions in real time, flag anomalies, and reduce the number of false positives that have long been a bottleneck in fraud detection. Instead of looking at each transaction in a vacuum, AI systems can pull together data from across a customer's profile: transaction history, device behaviour, adverse media, even patterns in how they interact with an app.
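To make the idea concrete, here is a minimal sketch of how signals from across a customer's profile might be combined into a single anomaly score. This is a toy illustration, not any bank's or vendor's actual system: the field names, weights, and thresholds are all invented for the example.

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Transaction:
    amount: float      # transaction amount
    country: str       # country where the transaction originated
    new_device: bool   # True if the device fingerprint is unseen

def anomaly_score(history: list[Transaction], tx: Transaction) -> float:
    """Score a transaction against the customer's own history (0.0-1.0).

    Combines a statistical signal (how unusual the amount is for this
    customer) with simple behavioural signals (unseen country, new
    device) - the weights here are illustrative, not calibrated.
    """
    amounts = [t.amount for t in history]
    mu, sigma = mean(amounts), stdev(amounts)
    z = abs(tx.amount - mu) / sigma if sigma else 0.0
    score = min(z / 3.0, 1.0)  # a z-score of 3+ maps to the maximum
    if tx.country not in {t.country for t in history}:
        score += 0.3           # country never seen for this customer
    if tx.new_device:
        score += 0.2           # new device fingerprint
    return min(score, 1.0)
```

A EUR 900 payment from a new country on a new device, against a history of EUR 40–60 domestic payments, would score at the top of the scale, while a routine payment scores near zero. Real systems replace these hand-set weights with trained models, but the principle – context from the whole profile, not the transaction in isolation – is the same.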
Juniper Research’s report, released in January 2026, identifies AI in fraud and security as one of the top three technology movers in the space for 2026, alongside tokenisation and civic identity applications. In other words, it is an established trend – not an emerging capability – and the foundation on which a lot of other things are being built.
But the problem is that AI is only as good as the data it is trained on. And right now, data quality is the biggest internal challenge financial institutions face. In the same industry poll, 41% of respondents identified data completeness and quality as their top concern looking into 2026, ahead of the limitations of existing technology tools (25%) and the cost of compliance (16%).
‘AI is only as effective as the data behind it,’ says Paul. ‘If your data is fragmented, incomplete, or stuck in silos, even the best models won’t deliver meaningful results. Data quality and accessibility aren’t a technical detail; they’re the foundation of effective AI-driven financial crime prevention.’
KYC processes have often been treated as a box-ticking exercise: get the documents, verify the identity, and move on. Both regulators and the industry are now correcting that assumption – and they're not wasting any time.
Under the EU's Sixth Anti-Money Laundering Directive (6AMLD) and the incoming Payment Services Directive 3 (PSD3), KYC is no longer optional or loosely enforced. It is a tightly governed, tech-enabled obligation embedded directly into onboarding and payment flows. The EU's new Anti-Money Laundering Authority (AMLA), operational since July 2025, has the legal power to directly supervise high-risk entities. And a single EU AML rulebook, fully applicable from July 2027, will replace the patchwork of national rules with one unified set of requirements across member states.
The eIDAS 2.0 framework adds another layer. It requires every EU member state to provide a standardised digital identity wallet – reusable across public and private services. The idea is straightforward: verify someone properly once, and that verification travels with them.
While the instinct to throw AI at the problem might be understandable, deploying the new technology on top of weak KYC processes doesn't really fix that much. In fact, it might just automate the mistakes. The consensus, reinforced by both regulators and practitioners, is clear: get the foundations right first, and only then layer on the sophisticated tools.
Obviously, that means investing in identity verification at the point of onboarding, not as an afterthought. It also means training AI models on diverse, external data – not just internal transaction histories, which will most likely reproduce existing blind spots. Finally, it means treating real-time intervention as a priority, simply because catching fraud after the fact and reimbursing the customer does nothing to actually stop the crime.
Paul says 2026 must be the year of structured rollout – not experimentation without direction.
‘Start with the basics: understand whether your data is complete, consistent, explainable, and actually accessible across systems, because AI won’t fix weak data – it will amplify it. Then pick one focused use case where AI supports detection, triage, or analysis without replacing human judgment, and use it to learn. A sensible early win is using AI to surface gaps and anomalies in your own data, while you build the foundations.’
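The 'surface gaps in your own data' starting point Paul describes can begin with something as simple as a completeness audit over customer records. The sketch below is hypothetical: the required field names are invented for the example, and a real KYC schema would be far richer.

```python
# Illustrative required KYC fields - a real schema would differ.
REQUIRED = ("customer_id", "full_name", "date_of_birth",
            "country", "id_document")

def profile_gaps(records: list[dict]) -> dict[str, int]:
    """Report, per required field, how many records are missing it.

    Missing means the key is absent or the value is empty/None -
    exactly the kind of gap that silently degrades a model trained
    on this data.
    """
    gaps = {field: 0 for field in REQUIRED}
    for rec in records:
        for field in REQUIRED:
            if not rec.get(field):
                gaps[field] += 1
    return {field: n for field, n in gaps.items() if n}
```

Running this across an onboarding database gives a per-field picture of where the foundations are weak, before any model training is attempted.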
He adds that AI must be treated as a living system, not as a one-off deployment.
‘Models need continuous training, testing, governance, and oversight as fraud patterns evolve. And AI isn’t just large language models – anomaly detection, graph analytics, clustering, behavioural models, and rule-enhancing approaches can be faster and more explainable in regulated environments. Finally, most firms should start with fraud before AML: fraud cases are easier to label and confirm with precision, while AML demands a broader context and longer-term judgment. Build capability in fraud first, then extend it into AML.’
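Of the alternatives Paul lists, graph analytics is perhaps the easiest to picture: accounts that share identifiers (a device fingerprint, a phone number, an address) across supposedly unrelated customers are a classic fraud-ring signal. The sketch below clusters such accounts with a small union-find; the account and attribute names are invented for the example.

```python
from collections import defaultdict

def fraud_clusters(accounts: dict[str, set[str]]) -> list[set[str]]:
    """Group accounts that share any identifier (device, phone, address).

    accounts maps an account ID to its set of identifier strings.
    Returns the clusters of two or more linked accounts - the
    simplest form of graph clustering over a shared-attribute graph.
    """
    # Index: which accounts carry each identifier.
    by_attr: dict[str, list[str]] = defaultdict(list)
    for acc_id, attrs in accounts.items():
        for attr in attrs:
            by_attr[attr].append(acc_id)

    # Union-find over accounts linked by a shared identifier.
    parent = {a: a for a in accounts}

    def find(a: str) -> str:
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a

    for linked in by_attr.values():
        for other in linked[1:]:
            parent[find(linked[0])] = find(other)

    clusters: dict[str, set[str]] = defaultdict(set)
    for a in accounts:
        clusters[find(a)].add(a)
    return [c for c in clusters.values() if len(c) > 1]
```

If account A shares a device with B, and B shares a phone number with C, all three fall into one cluster even though A and C share nothing directly – which is precisely what rule-by-rule checks on individual accounts miss.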
To sum up, the regulatory deadlines are tightening, the technology is moving faster than most organisations are prepared for, and fraudsters, of course, are not waiting. That's why KYC, data, and AI can't be treated as separate conversations. They are the same one.
Matko Brusac is a senior copywriter at Wallester. He writes about payments, money, and how financial services are changing – from Embedded Finance and card issuing to everyday business spend. He previously worked in journalism and content marketing across European finance and fintech.

Aivar Paul is a leading Estonian Anti-Money Laundering (AML) expert with nearly 30 years of experience in law enforcement, banking, and regulation. Former Head of Estonia’s Financial Intelligence Unit and AML leader at Swedbank and LHV, he now leads ethics, governance, and financial integrity at Wallester.
The Paypers is a global hub for market insights, real-time news, expert interviews, and in-depth analyses and resources across payments, fintech, and the digital economy. We deliver reports, webinars, and commentary on key topics, including regulation, real-time payments, cross-border payments and ecommerce, digital identity, payment innovation and infrastructure, Open Banking, Embedded Finance, crypto, fraud and financial crime prevention, and more – all developed in collaboration with industry experts and leaders.