
Fighting financial fraud in the AI era: an interview with Anis Ahmed of The Fraud Fellas

Wednesday 18 June 2025 08:00 CET | Editor: Mirela Ciobanu | Interview

In this interview, Anis Ahmed, an anti-fraud expert and the Founder & Host of The Fraud Fellas, shares insights into the rapidly evolving financial fraud landscape.


We explore the latest trends—from real-time payments fraud to deepfake-enabled attacks—and the technologies and strategies that financial institutions must adopt to keep pace. With global fraud schemes becoming increasingly sophisticated, this piece highlights not only the threats but also the opportunities for collaboration, regulation, and innovation in fraud prevention.

 

Last time, you discussed the growing fraud risks associated with real-time payments (RTP). How has the landscape evolved since then?

Since that article, the pace of real-time payments adoption has only accelerated, and so has the fraud. Attackers are now blending methods: combining social engineering with account takeovers and deepfake-enabled identity fraud to exploit the instant, irrevocable nature of RTP. Faster payments mean less time for detection and intervention, so the pressure on fraud defences has increased exponentially. Statistics show that:

  • Transaction volume growth is approximately 15% to 20%

  • Dollar value growth is approximately 15% to 20%

  • There is no data yet to pinpoint the exact percentage increase in fraud, but it is widely believed to have outpaced the growth of RTP itself.

While not specific to RTP, technology firms such as Google and Meta have introduced AI-powered scam detection and alert tools for consumers. Additionally, countries such as Australia, the UK, Singapore, Kazakhstan, and India have implemented frameworks, legislative measures, tools, enhanced reporting, and data-sharing initiatives to combat scams across digital platforms, banking, and telecom sectors.

 

Are industry stakeholders making progress in building the multi-layered fraud prevention frameworks you emphasised?

Yes, there’s clear progress, but it’s a constant race. Financial institutions are investing in AI/ML fraud detection that goes beyond static rules, using real-time data analysis, device intelligence, and behavioural biometrics to detect anomalies with minimal friction. Data enrichment from external sources is also boosting accuracy.

Regulatory pressure is further incentivising institutions to strengthen preventative measures at both ends of a transaction. The recent fraud report from UK Finance suggests that APP fraud in the UK is down by 2% due to new mandatory reimbursement rules.

Still, challenges remain; siloed data limits visibility on financial crime, and balancing strong security with user experience is tough, especially as fraudsters continue to evolve rapidly.

 

Have you observed more collaboration or intelligence-sharing across the ecosystem?

There has been a noticeable improvement. Forums like the Emerging Payments Association and initiatives such as the Global Anti-Scam Alliance have helped drive cross-industry collaboration. In addition, several institutions are building their own consortia networks. However, intelligence sharing is still often hindered by regulatory concerns and competitive hesitation. We need to treat fraud as a shared risk, not a proprietary problem. To make meaningful progress, greater global coordination and stronger public-private partnerships are essential.

 

Let’s talk about the role of regulation. With the October deadline approaching, how effective has Confirmation of Payee (CoP) been so far in reducing fraud in RTP?

CoP is a great step forward, particularly in reducing misdirected payments and APP fraud. It gives consumers a moment to pause and reconsider. But CoP is not foolproof — fraudsters adapt by social engineering the context around the payment to make the name match less relevant. CoP helps, but it’s not a silver bullet. It must be combined with customer education, transaction monitoring, and strong identity controls.
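Conceptually, a CoP check reduces to comparing the name the payer typed against the name held on the destination account and returning a tiered outcome. The sketch below is a toy illustration using Python's standard-library difflib; the thresholds and matching rules are invented for illustration and bear no relation to the actual algorithms CoP providers use:

```python
from difflib import SequenceMatcher

def cop_check(account_name: str, entered_name: str) -> str:
    """Toy Confirmation of Payee check: compare the payee name the payer
    entered against the name held on the destination account.
    Real CoP services apply far richer, provider-specific matching rules."""
    a = account_name.casefold().strip()
    b = entered_name.casefold().strip()
    score = SequenceMatcher(None, a, b).ratio()
    if score >= 0.95:
        return "match"
    if score >= 0.75:
        return "close_match"   # warn the payer before they confirm
    return "no_match"          # strong signal of misdirection or APP fraud

print(cop_check("Anis Ahmed", "Anis Ahmed"))   # match
print(cop_check("Anis Ahmed", "A. Ahmed"))     # close_match
print(cop_check("Anis Ahmed", "John Smith"))   # no_match
```

As the interview notes, fraudsters respond by coaching victims through the warning, which is why even a "no match" result must be backed by education and transaction monitoring rather than treated as a blocker on its own.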

 

Cifas Fraudscape 2025 reports a 1,055% surge in unauthorised SIM swap cases in the UK. Is this just a UK issue or part of a wider trend?

It’s absolutely a global trend. While the UK stats are shocking, we’re seeing spikes in the Middle East, parts of Asia, and even in countries with strong telco regulation. The common denominator is the gap between identity verification and telco processes.

 

What’s the goal of SIM swap fraud, and how are attackers pulling it off?

The main goal is to intercept one-time passcodes (OTPs) and hijack accounts, particularly those linked to banks, crypto platforms, and high-value wallets. Attackers either social engineer telco employees or exploit weak online processes to port the victim’s number to a SIM they control. Since many banks and digital wallets still rely on SMS-based two-factor authentication (2FA), fraudsters gain full access once the SIM is compromised.

What preventive measures should banks and telcos prioritise?

  • Banks: reduce reliance on SMS OTPs; use phishing-resistant MFA, including passkeys, biometrics. Monitor SIM swap signals from mobile networks.

  • Telcos: enforce stricter in-store and remote identity checks. Flag and delay SIM swaps on high-risk numbers.

  • Technologies: SIM swap detection services already exist that let institutions check whether a number was recently ported before trusting it for authentication.

Cross-industry data sharing is key: fraud alerts from telcos need to flow to financial institutions in real time.
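As a sketch of how such a telco signal could feed a bank's authentication logic, the snippet below assumes a hypothetical feed supplying the timestamp of the customer's last SIM change, and refuses to trust SMS OTPs for a cooldown window after a swap. The 72-hour window and the function names are illustrative choices, not an industry standard:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical signal: how recently the customer's number was moved to a
# new SIM. Real deployments would query a telco or aggregator service.
SIM_SWAP_COOLDOWN = timedelta(hours=72)

def requires_step_up(last_sim_change: datetime, now: datetime) -> bool:
    """Return True when an SMS OTP should NOT be trusted because the SIM
    was swapped recently; force a phishing-resistant factor instead."""
    return (now - last_sim_change) < SIM_SWAP_COOLDOWN

now = datetime(2025, 6, 18, 8, 0, tzinfo=timezone.utc)
print(requires_step_up(now - timedelta(hours=5), now))   # True: swap 5h ago
print(requires_step_up(now - timedelta(days=30), now))   # False: swap long past
```

The design point is the one made above: the swap event itself is harmless; the fraud happens when an institution keeps trusting the phone number afterwards, so the alert must arrive before the next OTP is sent.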

Social engineering and online romance scams

Why are romance scams so effective and increasingly widespread?

Because they exploit emotion, not logic. Victims are groomed over time, trust is carefully established, and by the time an ‘emergency’ arises, they’re already emotionally invested and committed. The shift to digital-only relationships, especially post-pandemic, has made it easier for fraudsters to operate. The social fabric is broken; many people feel isolated and alone, and now interact more digitally than physically, creating the perfect environment for manipulation.

 

What other social engineering scams should the industry be watching?

Most of these scams can be amplified by sophisticated technologies such as AI-enabled deepfakes and synthetic identities:

  • Impersonation scams (family/friend-in-need, CEO fraud);

  • Investment scams using fake trading platforms;

  • Job scams preying on economic instability;

  • Pig butchering scams – long-con investment frauds that bleed victims dry gradually.


How are fraudsters using AI-generated voice/video to bypass controls?

Fraudsters are using AI-generated voice and video (deepfakes) to convincingly mimic real people, making scams harder to detect and bypassing security controls.

Common use cases include:

  • Synthetic onboarding: deepfakes used to pass eKYC and open accounts with fake or stolen IDs.

  • Voice impersonation: AI-mimicked voices deceive call centres to reset credentials or approve fraud.

  • Video call scams: impersonating trusted figures to trick victims into sending money or data.

  • Biometric bypass: deepfakes fool facial and voice recognition, especially with weak liveness checks.

  • BEC 2.0: deepfake audio/video enhances fake emails, making scams more believable.


Can you share examples or scenarios where this type of fraud is being executed?

Case 1: In early 2024, a global design and engineering firm, known for icons like the Sydney Opera House and Bird’s Nest Stadium, fell victim to a deepfake scam. An employee at the Hong Kong office received a phishing email about a confidential transaction. When they asked for verification, the fraudsters used AI-generated deepfake videos and voices to impersonate the CFO and other colleagues on a video call. Convinced of its legitimacy, the employee proceeded to make 15 wire transfers totalling USD 25.6 million.

Case 2: In 2025, a French woman was scammed out of EUR 830,000 by fraudsters using AI-generated images to impersonate Brad Pitt in a fake online romance. The deception lasted 18 months, exploiting emotional trust and fabricated medical emergencies.

These aren’t theoretical; both are reported real-world cases, highlighting the rising threat of AI-driven attacks and the urgent need for stronger digital literacy and public awareness, verification protocols, and deepfake detection.

 

How are banks and fintechs using AI/ML to fight fraud?

AI and machine learning are revolutionising fraud prevention by enabling faster, smarter, and more adaptive defences. Key applications include:

  • Detecting anomalies in real-time at scale;

  • Building dynamic customer risk profiles;

  • Automated case investigation and alert triage;

  • Predictive modelling to anticipate emerging fraud patterns;

  • Behavioural biometrics to detect imposters in real time.
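A minimal illustration of the first two ideas: the toy function below profiles a customer from their own payment history and flags amounts that deviate sharply from it. Production ML systems learn from far richer signals (device, location, behaviour, network data) than a single z-score on amounts; this sketch only shows the shape of per-customer anomaly detection:

```python
import statistics

def is_anomalous(history: list[float], amount: float,
                 z_threshold: float = 3.0) -> bool:
    """Flag a payment whose amount deviates sharply from the customer's
    own history -- a toy stand-in for learned per-customer risk profiles."""
    if len(history) < 5:
        return False            # too little data to profile the customer
    mu = statistics.mean(history)
    sigma = statistics.pstdev(history)
    if sigma == 0:
        return amount != mu     # flat history: anything different stands out
    return abs(amount - mu) / sigma > z_threshold

history = [20.0, 35.0, 25.0, 30.0, 40.0, 22.0]
print(is_anomalous(history, 28.0))    # False: typical spend
print(is_anomalous(history, 900.0))   # True: extreme outlier
```

The "minimal friction" point above follows directly: a profile-based check like this stays silent for normal behaviour and only intervenes on genuine outliers.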


What techniques or tools are showing real promise?

  • Graph-based link analysis to detect fraud rings;

  • Federated learning for sharing intelligence without exposing sensitive data;

  • Natural language processing to analyse social engineering attempts in chats and calls;

  • Generative AI for training simulations and fraud scenario planning;

  • Device intelligence and telemetry to flag suspicious access patterns;

  • Adaptive authentication that adjusts based on real-time risk signals.
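To make the first of these techniques concrete, here is a toy version of graph-based link analysis: accounts that share a device identifier are linked, and connected components above a minimum size become candidate fraud rings. The data, names, and threshold are invented for illustration; production systems link on many more attributes (IP addresses, payees, postal addresses) and score components rather than hard-filtering:

```python
from collections import defaultdict

def find_fraud_rings(account_devices: dict[str, set[str]],
                     min_size: int = 3) -> list[set[str]]:
    """Union-find over accounts linked by shared device IDs; connected
    components of at least min_size accounts are candidate fraud rings."""
    parent: dict[str, str] = {}

    def find(x: str) -> str:
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    by_device: defaultdict[str, list[str]] = defaultdict(list)
    for acct, devices in account_devices.items():
        find(acct)                          # register every account
        for dev in devices:
            by_device[dev].append(acct)

    for accts in by_device.values():        # link accounts sharing a device
        for other in accts[1:]:
            parent[find(accts[0])] = find(other)

    components: defaultdict[str, set[str]] = defaultdict(set)
    for acct in account_devices:
        components[find(acct)].add(acct)
    return [c for c in components.values() if len(c) >= min_size]

rings = find_fraud_rings({
    "acct1": {"devA"}, "acct2": {"devA", "devB"},
    "acct3": {"devB"}, "acct4": {"devC"},
})
print(rings)    # one candidate ring linking acct1, acct2, acct3
```

The value of the graph view is exactly what rules-based monitoring misses: none of the three linked accounts looks suspicious alone, but the shared devices tie them together.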


Based on your experience as an anti-fraud expert, what’s one underrated trend or blind spot you think more people should pay attention to?

We’ve become overly focused on preventing data breaches, pouring millions into securing systems while overlooking a hard truth: breaches are inevitable. Given the complexity and interconnectedness of today’s digital ecosystem, perfect security is a myth.

The real blind spot is our ongoing reliance on knowledge-based data (passwords, SSNs, OTPs, mothers’ maiden names) for identity and authentication. Once stolen, this data is easily reused for account takeovers, synthetic IDs, and more.

It’s time to change the narrative, shifting from breach prevention to breach resilience. We need to invest in technologies that make stolen data useless, such as passkeys, behavioural biometrics, possession- and inherence-based factors, and dynamic, continuous identity verification.

 

By changing the narrative, we move from treating symptoms to addressing the root causes of modern fraud.

 

About Anis Ahmed

Anis Ahmed is a renowned anti-fraud expert and the Founder of a digital identity and anti-fraud startup. With over 25 years of experience in anti-fraud, financial crimes, and corporate investigations, he's actively involved in the global fight against financial crime.

He's also the founder and host of ‘The Fraud Fellas’, a forum dedicated to discussing fraud and its societal impact, and leads the MENA Chapter of ACFCS.



