Generative AI (GenAI) has been the defining innovation of 2023, and its impact can be seen in nearly every corner of the technological world. While GenAI has introduced many benefits, it has also left businesses and consumers far more exposed to fraud. Sophisticated and novice fraudsters alike are now equipped with GenAI tools that let them scale and improve the quality of their activities at an unprecedented rate. Attacks that payment and fraud professionals know well are multiplying, executed with greater precision and efficiency thanks to GenAI. From social engineering and identity theft to credit card attacks and policy abuse, fraudsters are now more adept than ever.
GenAI is still new yet rapidly evolving. Going into 2024, fraud, payments, and risk teams must clearly understand its capabilities and the tools needed to combat it.
Generative AI’s ability to automate processes and generate new, highly realistic content is of primary concern to fraud fighters.
Scripts and code can be generated instantly with GenAI, giving fraudsters the tools they need to automate their attacks. Even complex attacks can now be executed with greater efficiency and without manual intervention. Experienced programmers are no longer required to implement sophisticated attacks: fraudsters of all skill levels can receive end-to-end code and run powerful attacks in the background.
GenAI leverages vast datasets to create hyper-realistic content that deceives both humans and computers. Previously, fake content could be identified by spotting mistakes or simply because it looked inauthentic. Fraudsters can now feed GenAI with consumer data that is readily available online, such as social media profiles, or with information obtained through hacking. These capabilities give fraudsters powerful tools to scale, amplify, and enhance their attacks.
We are witnessing a surge in both the quality and the quantity of fraudulent attacks. According to the Federal Trade Commission, 2022 saw 2.4 million fraud reports, resulting in USD 8.8 billion in reported losses. Socially engineered attacks, account takeovers, policy abuse, and identity fraud in particular are increasing dramatically.
By prompting GenAI, fraudsters can create highly convincing and personalised phishing emails, help desk scam SMS messages, CEO fraud emails, and detailed scripts for phone scams. They feed the AI with information about their targets to craft tailored, credible messages that manipulate victims into divulging confidential information or performing actions that benefit the fraudster.
Consumer identities can easily be stolen or fabricated with Generative AI. For example, the liveness tests used in KYC checks and other identity verification processes can be bypassed by replicating physical or behavioural responses to fool the systems. Additionally, synthetic IDs can easily be created by automatically combining real and fake information into a new identity, such as pairing a stolen Social Security number with a realistic fake image or video.
Fraudsters use tools based on Large Language Models (LLMs) to engage in text-based conversations directly with real customer service representatives. These highly realistic conversations can be used to exploit policies such as 'item not received' or false purchase claims. Referral and loyalty promotions can also be exploited by creating highly credible, almost indistinguishable fake accounts.
Brute force attacks take over consumer accounts through trial and error, and GenAI can do this within minutes, hijacking hundreds of accounts across numerous sites. Furthermore, with breached credentials readily available on the dark web, GenAI-powered tools can enter those credentials automatically and gain access to customer accounts even more efficiently.
Looking toward 2024, the landscape of Generative AI fraud will continue to evolve and pose ever-growing challenges for individuals and businesses. As AI technology advances, so will the sophistication of fraudsters and their attacks. Identity harvesting, impersonation, and account takeover tactics will only increase as GenAI becomes more powerful. Fraudsters will be able to speed up and scale their attacks, with an ever-shrinking learning curve even for the most complex scams.
To combat the growing threat of GenAI fraud, merchants must take steps to ensure that their customers are legitimate and not dangerous fraudsters. One way to achieve this is by relying on first-party data that cannot be altered, even by advanced technologies like GenAI. Attributes such as physical addresses, emails, and phone numbers with a clear digital footprint and historical activity remain unchanged in the GenAI era. Because other businesses have already validated these attributes, they are, in the right context and combination, ideal for validating customer identities at scale. Learn how Identiq leverages first-party data.
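To make the idea of footprint-backed, first-party attributes more concrete, here is a minimal, purely illustrative sketch of how a merchant might combine a few such signals into a simple validation score. It is not Identiq's actual method or API; the attribute names, weights, and threshold are assumptions chosen only for illustration.

```python
from dataclasses import dataclass

# Hypothetical illustration only: the signal names, weights, and threshold
# below are assumptions for this sketch, not Identiq's model or API.

@dataclass
class IdentitySignals:
    email_age_days: int               # how long the email has had an observable footprint
    phone_previously_verified: bool   # phone number already validated by another business
    address_matches_history: bool     # physical address consistent with past activity

def first_party_validation_score(signals: IdentitySignals) -> float:
    """Combine first-party attribute signals into a 0-1 confidence score."""
    score = 0.0
    # Long-lived email addresses with real history are hard to fabricate on demand.
    if signals.email_age_days >= 365:
        score += 0.4
    elif signals.email_age_days >= 30:
        score += 0.2
    # A phone number already validated elsewhere carries real-world weight.
    if signals.phone_previously_verified:
        score += 0.35
    # An address consistent with historical activity supports the identity.
    if signals.address_matches_history:
        score += 0.25
    return min(score, 1.0)

def is_trusted_customer(signals: IdentitySignals, threshold: float = 0.6) -> bool:
    """Treat the customer as trusted when the combined score clears the threshold."""
    return first_party_validation_score(signals) >= threshold

if __name__ == "__main__":
    customer = IdentitySignals(email_age_days=800,
                               phone_previously_verified=True,
                               address_matches_history=False)
    print(is_trusted_customer(customer))  # True: 0.4 + 0.35 = 0.75 >= 0.6
```

In practice, a privacy-preserving network would establish signals like "previously verified" without the parties exchanging the underlying personal data; the point of the sketch is only that established, footprint-backed attributes can be combined deterministically and at scale.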
This editorial is part of The Paypers' Fraud Prevention in Ecommerce Report 2023-2024, the ultimate source of knowledge that delves into the world of fraud prevention, revealing the most effective security methods for companies to stay one step ahead of bad actors and secure their businesses.
Uri Arad, Identiq’s co-founder and CTO, has been fighting fraud for over a decade, witnessing fraud and identity challenges from the product, risk, and R&D perspectives. Previously, Uri was Head of Analytics and Research in PayPal’s risk department.
Identiq is a private network for identity validation that empowers companies to safely collaborate with each other to validate trusted customers – without sharing any sensitive data or identifiable information. Our peer-to-peer technology helps some of the world’s largest companies to identify good customers, fight fraud, and offer better digital experiences.