
FinCEN warns financial institutions of deepfake media fraud schemes

Friday 15 November 2024 10:45 CET | News

The US Department of the Treasury’s Financial Crimes Enforcement Network (FinCEN) has issued an alert to help financial institutions identify fraud schemes involving deepfake media.

The alert, which draws on analysis of Bank Secrecy Act (BSA) data, open-source reporting, and information received from law enforcement, focuses on fraud schemes that use deepfake media created with generative artificial intelligence (GenAI) tools and explains the typologies associated with these schemes. It also provides red flag indicators to support the identification and reporting of suspicious activity and underlines financial institutions’ reporting requirements under the BSA.


The move is part of the US Department of the Treasury’s effort to give financial institutions the information they need on the opportunities and risks associated with the use of artificial intelligence. FinCEN officials underlined that, while GenAI shows promise as an evolving technology, fraudsters are exploiting it to defraud US businesses and consumers. The bureau urged financial institutions to remain vigilant regarding the use of deepfakes and to report related suspicious activity, so that the US financial system and the public can be safeguarded against the abuse of these tools. Fraud and cybercrime, which the abuse of deepfake and GenAI media can facilitate, are among FinCEN’s Anti-Money Laundering and Countering the Financing of Terrorism (AML/CFT) National Priorities.
 

Detecting and combatting deepfake media

Since 2023, FinCEN has identified a substantial increase in suspicious activity reporting related to the use of deepfake media in fraud schemes targeting financial institutions and their customers. In these schemes, criminals most often altered or created fraudulent identity documents to bypass identity verification processes and authentication methods. The schemes also involved online scams and consumer fraud, including check fraud, credit card fraud, authorised push payment fraud, loan fraud, and unemployment fraud. In addition, bad actors opened fraudulent accounts using GenAI-generated identity documents and used them as funnel accounts.

In most cases, FinCEN found that financial institutions detected GenAI and synthetic content in identity documents by re-reviewing a customer’s account opening documents. When investigating a suspected deepfake image, reverse image searches and other open-source methods can reveal whether an identity photo matches one in an online gallery of GenAI-created faces. Furthermore, financial institutions and third-party providers of identity verification solutions can apply more technically advanced methods to recognise potential deepfakes, including analysing an image’s metadata or using software designed to detect possible deepfakes or manipulations.
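For illustration only, the short Python sketch below shows what a basic metadata check of this kind might look like. It is not a tool referenced by FinCEN or the alert: the library used (Pillow), the keyword list, and the specific fields inspected are assumptions made for the example, and metadata can easily be stripped or forged, so such a check can only complement dedicated deepfake-detection software and manual review.

```python
# Illustrative sketch only (not FinCEN guidance): look for metadata hints
# that an image was produced or edited with GenAI tooling.
# Requires Pillow (pip install Pillow). The keyword list is a hypothetical example.
from PIL import Image, ExifTags

SUSPECT_KEYWORDS = ("stable diffusion", "midjourney", "dall-e", "generative")

def metadata_red_flags(path: str) -> list[str]:
    """Return human-readable notes about metadata that may warrant a closer look."""
    flags = []
    with Image.open(path) as img:
        # EXIF tags (common in JPEGs): the Software/Artist fields sometimes
        # name the tool that created or last edited the image.
        exif = img.getexif()
        for tag_id, value in exif.items():
            name = ExifTags.TAGS.get(tag_id, str(tag_id))
            if name in ("Software", "Artist") and isinstance(value, str):
                if any(k in value.lower() for k in SUSPECT_KEYWORDS):
                    flags.append(f"EXIF {name} mentions GenAI tooling: {value!r}")
        if not exif:
            flags.append("No EXIF data present (often stripped; not conclusive on its own)")
        # Text chunks (e.g. in PNGs): some GenAI tools embed prompts or settings here.
        for key, value in img.info.items():
            if isinstance(value, str) and any(k in value.lower() for k in SUSPECT_KEYWORDS):
                flags.append(f"Embedded text field {key!r} references GenAI tooling")
    return flags

if __name__ == "__main__":
    import sys
    for note in metadata_red_flags(sys.argv[1]):
        print(note)
```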





Keywords: deep fake, AML, fraud management, fraud detection, generative AI, artificial intelligence
Categories: Fraud & Financial Crime
Companies: FinCEN
Countries: United States