Voice of the Industry

The rise of synthetic identity fraud: a call to action for financial institutions

Friday 11 April 2025 08:57 CET | Editor: Irina Ionescu | Voice of the industry

Dheeraj Maken, Practice Director at Everest Group, discusses the surge in synthetic identity fraud and what financial institutions in the US can do to spot and deter it. 


In an industry driven by trust and risk management, digital identity is pivotal, particularly with the surge in digital transaction volumes. With global digital identity solutions expected to witness significant growth through 2030, the urgency to secure digital onboarding and authentication has never been greater.
 

[Chart not shown] Figure: market size of global digital identity solutions (USD billions; 2023-2030 forecast). Source: Everest Group research


As organisations increasingly move toward digital transformation, the importance of verifying and managing user identities has become paramount to combat rising cyber threats, fraud/scams, and data breaches.

Understanding synthetic identity fraud

Synthetic identity fraud has emerged as a formidable challenge within the financial landscape, often eluding traditional detection mechanisms and resulting in substantial economic losses. The US Federal Reserve defines it as the fusion of real data (such as Social Security numbers) with fabricated information to create entirely new identities. These identities often go undetected, gradually building credibility before being exploited for major financial gain. According to a recent report from credit reporting agency TransUnion, US-based lenders faced an all-time-high risk exposure of USD 3.2 billion to synthetic identity fraud in the first half of 2024, a 7% increase over the same period in 2023. The surge was concentrated in auto loans, bank credit cards, retail credit cards, and unsecured personal loans.

Unlike fully fake profiles, synthetic identities incorporate enough real data to bypass traditional fraud detection systems. This sophistication makes them particularly insidious and dangerous.


The deepfake factor: Generative AI’s dark side

Advancements in generative artificial intelligence (GenAI) and deepfake technologies have exacerbated the challenges of detecting synthetic identities. Fraudsters can now produce highly convincing fake documents and realistic digital personas, complicating the verification processes for financial institutions. The fusion of behavioural, predictive, and generative analytics has become essential to stay ahead of such evolving threats.

Case in point: real-world fallout

The widespread use of credit cards in the US contributes to the higher incidence of stolen identity fraud compared to regions where debit cards are more prevalent. Credit cards often involve higher credit limits and more frequent transactions, presenting attractive targets for fraudsters. Additionally, the US payment ecosystem's complexity and the slower adoption of advanced authentication technologies have historically provided more opportunities for fraudulent activities.

In one case, a Georgia man was sentenced to over seven years for using children's Social Security numbers to build synthetic profiles and defraud banks of nearly USD 2 million. More recently, Charlie Javice, the founder of a college financial aid startup acquired by JPMorgan Chase for USD 175 million, was found guilty of defrauding the bank by fabricating a list of over four million student users using synthetic data. The high-profile case underscores how synthetic information, even when used to inflate a startup’s value rather than exploit credit, can lead to devastating financial and reputational consequences.


Red flags for unmasking synthetic identity fraudsters

Spotting a synthetic identity can prove difficult for the untrained eye, so financial institutions and consumers should be vigilant for indicators of synthetic identity fraud, including:

  • Inconsistent personal information: discrepancies in personal details across different accounts or applications;

  • Multiple identities linked to a single contact point: several identities associated with the same phone number or email address;

  • Unusual credit activity: rapid establishment of credit followed by significant transactions or cash advances;

  • Mismatched identification documents: identification documents that do not align with other provided information or that appear altered;

  • Use of mule accounts: accounts being used to receive, hold, or transfer illicit funds, often controlled by fraudsters using synthetic identities to obscure the true origin of the funds.
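Several of these indicators lend themselves to simple rule-based screening before any machine-learning model is involved. As a minimal, stdlib-only illustration of the second red flag above (multiple identities linked to a single contact point), the sketch below groups applications by phone number and email address and flags contact points shared by more distinct identities than expected. The field names and threshold are hypothetical, not drawn from any institution's actual system.

```python
from collections import defaultdict

def flag_shared_contact_points(applications, max_identities=1):
    """Flag phone numbers or email addresses linked to more distinct
    identities than `max_identities` -- the 'multiple identities linked
    to a single contact point' red flag.

    `applications` is a list of dicts with (hypothetical) keys
    'ssn', 'phone', and 'email'.
    """
    flags = []
    for field in ("phone", "email"):
        identities = defaultdict(set)  # contact point -> distinct SSNs seen
        for app in applications:
            identities[app[field]].add(app["ssn"])
        for contact, ssns in sorted(identities.items()):
            if len(ssns) > max_identities:
                flags.append((field, contact, len(ssns)))
    return flags
```

For example, four applications sharing one phone number across three different Social Security numbers would be flagged as `("phone", <number>, 3)`. Real screening systems combine many such rules with statistical scoring, but even this simple join across applications catches reuse that a per-application check would miss.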


Mitigation measures adopted by US institutions

To combat the growing menace of synthetic identity fraud, American financial institutions and regulatory bodies have implemented several strategies:

  • Enhanced identity verification protocols: deploying AI-driven anomaly detection systems to strengthen identity verification processes and detect fraudulent activity more effectively.

  • Collaboration and information sharing: sharing information about synthetic identity fraud patterns to identify and prevent cross-institutional fraud. The US Federal Reserve has also released a Synthetic Identity Fraud Mitigation Toolkit to provide resources and best practices for combating this type of fraud. 

  • Regulatory measures: regulatory bodies are issuing alerts and guidelines to help institutions recognise and mitigate risks associated with deepfake and AI-generated media. Examples include the Financial Crimes Enforcement Network (FinCEN) deepfake alert (Oct 2023), the Federal Trade Commission's (FTC) consumer and business alerts on AI and identity theft, and the National Institute of Standards and Technology's (NIST) Digital Identity Guidelines (SP 800-63).
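To make the "anomaly detection" idea above concrete, the sketch below applies a plain z-score outlier test to application velocity, since a rapid burst of credit applications is one of the red flags for synthetic identities. This is a deliberately simplified stand-in, not how any particular institution's system works: production systems use far richer features and models, and the account data and threshold here are hypothetical.

```python
import statistics

def velocity_anomalies(apps_per_account, z_threshold=3.0):
    """Return accounts whose application count is a statistical outlier.

    A crude stand-in for AI-driven anomaly detection: accounts whose
    number of credit applications sits more than `z_threshold` standard
    deviations above the mean are flagged for review.
    """
    counts = list(apps_per_account.values())
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)  # population standard deviation
    if stdev == 0:
        return []  # every account behaves identically: nothing to flag
    return [
        account
        for account, n in apps_per_account.items()
        if (n - mean) / stdev > z_threshold
    ]
```

For instance, `velocity_anomalies({"acct-1": 2, "acct-2": 2, "acct-3": 2, "acct-4": 20}, z_threshold=1.5)` flags only `"acct-4"`. The design point is the same one the article makes: fraud signals emerge from behaviour across a population of accounts, not from inspecting any single application in isolation.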


From threat to opportunity

Ultimately, the fight against synthetic identity fraud is not just about defence; it is an opportunity to lead through innovation. Financial institutions that embrace AI-powered fraud orchestration, real-time analytics, and cross-industry collaboration will not only safeguard their operations but also reinforce consumer trust. By turning AI from a source of risk into a core defence asset, banks can better navigate an increasingly complex digital identity landscape.


About Dheeraj Maken

Dheeraj Maken is a Practice Director at Everest Group, leading the firm's Banking and Financial Services Business Process Services programme. With over 13 years of experience in the IT/ITES industry, he has worked with multiple global consulting and technology firms. Before joining Everest Group, Dheeraj held key consulting roles at Accenture Strategy, Wipro BPS, and TCS, contributing to their BFSI, Telecom, and IT practices.

 


About Everest Group

Everest Group is a leading global research firm helping business leaders make confident decisions. We guide clients through today’s market challenges and strengthen their strategies by applying contextualised problem-solving to their unique situations. This drives maximised operational and financial performance and transformative experiences. Our deep expertise and tenacious research focused on technology, business processes, and engineering through the lenses of talent, sustainability, and sourcing deliver precise and action-oriented guidance. Find further details and in-depth content at www.everestgrp.com.


Keywords: synthetic identity, fraud detection, online fraud, fraud management, identity theft, identity verification, identity fraud, deep fake, regulation, money laundering, money transfer, credit card, credit card fraud, artificial intelligence, generative AI, GenAI
Categories: Fraud & Financial Crime
Companies: Everest Group
Countries: United States