Mirela Ciobanu
17 Oct 2025 / 5 Min Read
With GenAI fuelling financial fraud, Irfan Faizullabhoy, Product Manager at Persona, urges fintech leaders to build cross-functional bridges across teams and operations, strengthen identity verification practices, and continuously adapt defence strategies.
Fraud is evolving rapidly with the rise of generative AI. GenAI gives bad actors new tools to create deepfakes, synthetic faces, and AI-driven face spoofs, making fraud rings nearly invisible to traditional detection methods.
The numbers underline the urgency: nearly one-third of identity fraud cases now involve synthetic identities; the UK government projects 8 million deepfakes will circulate in 2025 (up from 500,000 in 2023); and Deloitte estimates generative AI could drive USD 40 billion in fraud losses in the US by 2027.
New fraud often happens ‘at the edges’, blurring where it begins, how customers are affected, and who is liable. A scam may start on social media, escalate through a phone call, and end with money sent to a fraudulent account - leaving financial institutions to absorb the loss. As fraud techniques evolve, the teams tasked with stopping it - product, IT, security, and data - must collaborate more closely and pursue common goals. Each of these teams faces mounting pressure to stem fraud from a different angle, and they are increasingly working together to fight off this emerging threat.
To counter this, Irfan Faizullabhoy from Persona urges fintech leaders to break down silos within teams and operations, strengthen defences with identity verification, and continuously adapt their fraud strategies.
Generative AI tools are improving at a surprisingly quick rate. From text (ChatGPT) and images (DALL·E, Midjourney) to video (Runway, Pika Labs, Sora 2), these tools can now generate hyper-realistic outputs that criminals are exploiting for fraud.
One fast-growing threat is the AI-based face spoof: manipulated or fabricated faces designed to bypass identity proofing and liveness checks. Deepfakes are the most notorious example, but far from the only one. Persona has identified over 50 distinct spoof types, ranging from face swaps and morphing to fully synthetic faces and avatars.
These spoofs are visually convincing and can mimic lifelike movements, making it nearly impossible for humans to spot the difference between real and fake.
Broadly, they fall into two categories: manipulated versions of real faces and fully fabricated, synthetic ones.
With spoofs becoming harder to detect, fintechs and digital platforms need smarter defences.
Here are three strategies to consider in the fight against GenAI fraud.
‘We need these cross-functional bridges because the most vulnerable organisations treat AI fraud as “someone else's problem”. In reality, we need to break down silos between PM, IT, security, and data teams by establishing regular touchpoints, shared frameworks, and metrics for tracking and handling risk’, Irfan says.
Other best practices for bringing teams together include joint or cross-over OKRs and shared outcomes and metrics that matter to the business, such as protecting revenue, safeguarding users and customers, preserving brand reputation, and mitigating regulatory scrutiny.
‘With deepfakes and synthetic identities becoming harder to detect, now is the time to strengthen your identity verification strategy. Consider implementing multi-layered verification that combines document checks, behavioural analysis, and real-time risk signals’, Irfan continues.
This identity verification strategy requires you to collect active and passive risk signals, such as liveness detection and device fingerprints. Then, layer these types of signals to detect bad actors based on the selfie, links to fraudulent activity, and other suspicious tells.
The more information you collect and verify about a user, the more confidence you have that they are who they claim to be.
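To make this concrete, here is a minimal sketch of what layering active and passive signals into a single risk score could look like. The signal names, weights, and values are illustrative assumptions for this article, not Persona's actual models or API.

```python
from dataclasses import dataclass

# Illustrative signal set: the fields and weights below are hypothetical,
# not Persona's actual scoring model.
@dataclass
class VerificationSignals:
    liveness_passed: bool       # active signal: user completed a liveness check
    selfie_match_score: float   # 0..1 similarity between selfie and ID photo
    known_device: bool          # passive signal: device fingerprint seen before
    ip_flagged: bool            # passive signal: IP linked to prior fraud

def layered_risk_score(s: VerificationSignals) -> float:
    """Combine active and passive signals into one 0..1 risk score (higher = riskier)."""
    risk = 0.0
    if not s.liveness_passed:
        risk += 0.4                              # failed liveness is a strong tell
    risk += (1.0 - s.selfie_match_score) * 0.3   # weak selfie match adds risk
    if not s.known_device:
        risk += 0.1                              # unseen device is mildly suspicious
    if s.ip_flagged:
        risk += 0.2                              # flagged IP adds risk
    return min(risk, 1.0)

# A convincing selfie submitted from a flagged IP still registers as risky:
print(layered_risk_score(VerificationSignals(True, 0.95, False, True)))  # ~0.315
```

The point of the layering is visible in the example: no single signal decides the outcome, but the combination of an unknown device and a flagged IP raises the score even when the selfie itself looks genuine.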
Identity verification systems assign a probability that an ID or face is genuine and belongs to the person submitting it. This risk rating can change as new AI-based spoofing techniques emerge. With that in mind, it’s useful to adjust the amount of evidence you request based on the risk that a submission might be fake.
To further thwart fraudsters, suspicious submissions can be routed to neutral screens (like a ‘thank you’ page) instead of failure messages, removing real-time feedback and making it harder for attackers to refine their methods.
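A simple routing sketch of this idea is shown below: rather than rejecting a risky submission outright, the flow returns a neutral page and queues the case for asynchronous review. The threshold and function names are hypothetical.

```python
# Hypothetical routing logic: high-risk submissions get a neutral 'thank you'
# screen plus an asynchronous review instead of an instant failure message,
# denying attackers the real-time feedback they need to refine their spoofs.
review_queue: list[float] = []

def queue_for_manual_review(risk_score: float) -> None:
    # Stand-in for a hand-off to a case-management or fraud-review system.
    review_queue.append(risk_score)

def route_submission(risk_score: float) -> str:
    if risk_score < 0.5:             # illustrative threshold
        return "approved"
    queue_for_manual_review(risk_score)
    return "neutral_thank_you_page"  # attacker sees a success-like neutral screen

print(route_submission(0.2))  # approved
print(route_submission(0.8))  # neutral_thank_you_page (case queued for review)
```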
That’s why we recommend businesses leverage ensemble models, which combine multiple algorithms, micromodels, and datasets to help you evaluate the probability that data submitted by a user is real or fake, and whether it has been presented to you in a legitimate way.
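As a rough illustration, an ensemble can be as simple as averaging the fake-probability scores of several independent detectors; production systems typically weight the models or train a meta-classifier on their outputs. The micromodels below are placeholders, since the article does not specify Persona's actual model mix.

```python
from statistics import mean
from typing import Callable

# Placeholder micromodels: each maps a submission's features to a
# fake-probability in [0, 1]. Real systems would use trained detectors.
Micromodel = Callable[[dict], float]

def texture_model(sub: dict) -> float:   # e.g., skin-texture artefacts
    return sub.get("texture_anomaly", 0.0)

def motion_model(sub: dict) -> float:    # e.g., unnatural facial motion
    return sub.get("motion_anomaly", 0.0)

def metadata_model(sub: dict) -> float:  # e.g., capture-device metadata tells
    return sub.get("metadata_anomaly", 0.0)

def ensemble_fake_probability(sub: dict, models: list[Micromodel]) -> float:
    # Simple averaging; a weighted vote or stacked meta-model is also common.
    return mean(m(sub) for m in models)

sub = {"texture_anomaly": 0.9, "motion_anomaly": 0.2, "metadata_anomaly": 0.7}
print(ensemble_fake_probability(
    sub, [texture_model, motion_model, metadata_model]))  # ~0.6
```

Because the models look at different artefacts, a spoof that fools one detector (here, the motion model) is still caught by the others, and the ensemble score stays elevated.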
Don’t just analyse submissions individually; look for patterns across clusters of verification attempts (more on this below). This lets you adjust friction in real time, adding checks like liveness gestures or a deterministic check such as NFC verification when risk is high.
Device and behavioural signals can be particularly important when fighting AI-based spoofs.
By combining visual, device, and behavioural data, you can catch sophisticated fraud at scale while keeping legitimate users flowing smoothly.
Some risk signals only appear when you examine submissions at scale. Link accounts using shared attributes like names, emails, payout addresses, or IPs to uncover fraud rings, repeated attacks, or novel techniques. By clustering these connections, you can detect coordinated activity that single-submission analysis would miss.
Clustering is especially important with GenAI because it is becoming increasingly hard to trust visual models alone to catch deepfakes. Deepfakes are convincing, so you need to layer your defences by looking for connections across accounts, especially linkages across passive signals (e.g., device fingerprint, IP address, email address).
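A minimal sketch of this kind of link analysis: treat every shared attribute value (email, IP, device fingerprint, payout address) as a link between submissions and group connected submissions with union-find. The data and attribute names are illustrative.

```python
from collections import defaultdict

def find(parent: dict, x: str) -> str:
    # Union-find root lookup with path compression.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def cluster_submissions(submissions: dict[str, dict]) -> list[set[str]]:
    """Group submissions that share any attribute value."""
    parent = {sid: sid for sid in submissions}
    by_attr = defaultdict(list)
    for sid, attrs in submissions.items():
        for key, value in attrs.items():
            by_attr[(key, value)].append(sid)
    # Union all submissions that share an attribute value.
    for sids in by_attr.values():
        for other in sids[1:]:
            parent[find(parent, sids[0])] = find(parent, other)
    clusters = defaultdict(set)
    for sid in submissions:
        clusters[find(parent, sid)].add(sid)
    return list(clusters.values())

subs = {
    "a1": {"email": "x@mail.com", "ip": "1.2.3.4"},
    "a2": {"email": "y@mail.com", "ip": "1.2.3.4"},  # shares an IP with a1
    "a3": {"email": "y@mail.com", "ip": "5.6.7.8"},  # shares an email with a2
    "a4": {"email": "z@mail.com", "ip": "9.9.9.9"},  # unlinked
}
print(cluster_submissions(subs))  # two clusters: {a1, a2, a3} and {a4}
```

Individually, a1, a2, and a3 might each pass a visual check; linked together through a shared IP and email, they look like a coordinated ring.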
Collecting more data increases assurance, but excessive checks frustrate legitimate users. To prevent this, we recommend using real-time risk signals, segmenting traffic, and adjusting verification dynamically. High-risk users can face additional measures like government ID checks, liveness gestures, or other step-up verifications, while low-risk users experience a seamless journey. This balances security and user experience effectively.
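One way to express this dynamic, risk-based friction is a simple step-up ladder, sketched below with illustrative thresholds and check names.

```python
# Illustrative step-up ladder: the riskier the submission, the more
# evidence is requested. Thresholds and check names are hypothetical.
def verification_steps(risk_score: float) -> list[str]:
    steps = ["document_check"]            # baseline everyone completes
    if risk_score >= 0.3:
        steps.append("liveness_gesture")  # medium risk: active liveness check
    if risk_score >= 0.6:
        steps.append("government_id_check")
        steps.append("nfc_chip_verification")  # deterministic, hard to spoof
    return steps

print(verification_steps(0.1))  # ['document_check'] - low-risk users sail through
print(verification_steps(0.7))  # all four checks for high-risk traffic
```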
‘With the multitude of threats, a company’s defence strategy needs to adapt continuously’, says Persona’s product expert. ‘Look for solutions and partners that can adapt to new fraud patterns in real time and provide your teams with actionable intelligence.’
Working with identity verification partners enables companies to design verification journeys that go beyond ID checks, ingest more signals, and maintain control over the identity lifecycle of their customers. A centralised library of signals and checks allows companies to collect information, deploy obstacles quickly, and iterate continuously to keep pace with evolving fraud. Tools for fraud investigation and link analysis also help reveal hidden connections across the user base, while adaptable solutions support any use case, whether that's KYC, KYB, or possible future use cases.
Branch, a platform enabling instant payments for workers and businesses, initially partnered with Persona to verify users during onboarding. However, the partnership quickly expanded as Branch leveraged Persona’s flexible and modern system to conduct ongoing verification throughout the user journey. For instance, when users request a name change, Branch uses Persona’s Document Verification to confirm authenticity - preventing account takeovers and ensuring continued trust and compliance.
Similarly, Bridge, a payments platform offering a stablecoin-backed debit product, worked with Persona to build a SAR filing system from scratch. This enabled Bridge’s fraud and AML teams to operate from a unified platform, focusing on high-risk cases rather than manual tasks. Lee Bagan, Bridge’s Director of Financial Crime, emphasises: ‘The true credit goes to the remarkable team Bridge surrounded me with. I’m incredibly proud of how hard we worked to ensure we got compliance right, holding ourselves to the highest standard every step of the way.’
By partnering with Persona, both companies gained actionable insights, streamlined workflows, and enhanced resilience against evolving fraud threats.
As fraud continues to evolve, accelerated by generative AI and increasingly sophisticated techniques, Irfan from Persona emphasises that businesses must stay ahead with proactive strategies. The recommended approach centres on three pillars: breaking down silos and connecting cross-functional teams, deploying smarter identity verification to detect and deter threats, and building resilient, adaptive defence strategies through real-time signals and strong partnerships. By embracing these strategies, fintechs, payments platforms, and digital ecosystems can protect their customers, safeguard their brand, and stay agile in the face of rapidly advancing fraud.
About author
Mirela Ciobanu is Lead Editor at The Paypers, specialising in the Banking and Fintech domain. With a keen eye for industry trends, she is constantly on the lookout for the latest developments in digital assets, regtech, payment innovation, and fraud prevention. Mirela is particularly passionate about crypto, blockchain, DeFi, and fincrime investigations, and is a strong advocate for online data privacy and protection. As a skilled writer, Mirela strives to deliver accurate and informative insights to her readers, always in pursuit of the most compelling version of the truth. Connect with Mirela on LinkedIn or reach out via email at mirelac@thepaypers.com.
The Paypers is the Netherlands-based leading independent source of news and intelligence for professionals in the global payments community.
The Paypers provides a wide range of news and analysis products aimed at keeping ecommerce, fintech, and payments professionals informed about the latest developments in the industry.