Fraudsters are now better equipped than ever to operate at scale. Digital and social channels have become new hunting grounds, and AI provides ample opportunities for scammers to automate, disguise, and refine their tactics to target vulnerable consumers.
In the UK, card ID theft is wiping out much of the progress made in reducing remote purchase fraud. Fraudsters exploit stolen card details and personal information, readily available on the dark web, to open or take over accounts. Phishing, pharming, and social media scraping are key methods driving the continuous surge in fraudulent activity.
There is also a rise in authorised push payment (APP) scams, particularly those fuelled by social engineering, emails, and fake app or web links. From government impersonations to investment fraud, romance, and delivery scams, criminals are becoming bolder and more creative. In 2023 alone, UK victims reported 252,429 cases of APP scams totalling almost GBP 459.7 million, causing significant financial and emotional harm.
Looking ahead, deepfake attacks that use AI-generated voice and visuals to bypass biometric checks or lend credibility to personal scams are likely to fuel even higher-value fraud.
Not only are fraud incidents increasing in frequency and cost, but banks also face mounting pressure from governments and consumer organisations to address the issue. While fraud and scams often originate online or via telephone, banks are expected to reimburse victims and are under increasing scrutiny to identify recipient accounts used for money laundering.
The UK is at the forefront of this fight, with new legislation to combat APP fraud taking effect on October 7, 2024. This law mandates shared liability between sending and receiving banks unless a customer is found to have acted fraudulently, or with gross negligence. Similar schemes are being considered globally, with the potential for voluntary collaboration and regulatory mandates.
Banks face more heavy lifting to prevent sophisticated synthetic fraud and comply with tighter regulations. They must strengthen their safeguards while providing frictionless experiences to retain and attract customers.
Many banks are turning to next-generation AI tools to fight back. Machine learning has proven its worth, but it has limitations, especially when fraud patterns change rapidly. Innovations such as incremental learning, an ACI-patented machine learning capability, can help address these challenges by maintaining and improving model performance as new data arrives. According to ACI internal data, incremental learning has been shown to deliver up to a 20% increase in fraud detection rates.
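ACI's incremental learning capability is proprietary, so the details are not public. As a rough illustration of the general idea, the toy online logistic regression below updates its weights one labelled transaction at a time instead of retraining from scratch; the feature names and learning rate are invented for the example.

```python
# Toy sketch of incremental (online) learning for fraud scoring.
# This is NOT ACI's patented method, only the general technique:
# the model adjusts with each labelled transaction it sees.
import math

class OnlineFraudScorer:
    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features  # one weight per transaction feature
        self.b = 0.0
        self.lr = lr

    def score(self, x):
        """Return an estimated fraud probability for feature vector x."""
        z = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, x, label):
        """Single-step gradient update from one labelled transaction
        (label: 1 = confirmed fraud, 0 = genuine)."""
        err = self.score(x) - label  # gradient of the log-loss
        self.b -= self.lr * err
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]

# Hypothetical features: [amount z-score, new device?, foreign IP?]
scorer = OnlineFraudScorer(n_features=3)
for x, y in [([2.1, 1, 1], 1), ([0.1, 0, 0], 0), ([1.8, 1, 0], 1)]:
    scorer.update(x, y)
```

The benefit in a fast-moving fraud environment is that each confirmed case nudges the model immediately, rather than waiting for the next batch retraining cycle.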
The proposed EU AI Act will require financial institutions that use AI for fraud detection to comply with stringent requirements for data management, security, and human oversight to protect consumer data.
Banks need a way to collectively utilise data responsibly to detect fraud without becoming anti-competitive or breaching customer confidentiality.
Fraudsters have long exploited banks' siloed nature to commit crimes undetected, taking advantage of the lack of data sharing between institutions. Although there are rules allowing data sharing to prevent crime, banks have historically been cautious, fearing regulatory violations.
ACI is at the forefront of enabling this type of cross-bank fraud detection collaboration. While pushing for regulatory changes, we leverage data and technology: our device intelligence and global device fingerprinting capabilities provide risk signals about potential fraud across billions of devices worldwide.
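Device fingerprinting works by deriving a stable identifier from the attributes a device exposes, so the same device can be recognised even when the account or card changes. The sketch below is a minimal illustration with hypothetical attribute names; production systems like ACI's combine far more signals across billions of devices.

```python
# Minimal device-fingerprinting sketch. Attribute names and the risk
# list are hypothetical examples, not ACI's actual signal set.
import hashlib
import json

def device_fingerprint(attrs):
    """Derive a stable identifier from observed device attributes."""
    canonical = json.dumps(attrs, sort_keys=True)  # key order is irrelevant
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Attributes a web or mobile SDK might observe during a session.
device = {"os": "Android 14", "screen": "1080x2400",
          "tz": "Europe/London", "lang": "en-GB"}
fp = device_fingerprint(device)

# A fingerprint previously linked to a confirmed mule account can be
# flagged the next time the same device opens an account elsewhere.
risky_devices = {fp}

def risk_signal(attrs):
    return device_fingerprint(attrs) in risky_devices
```

Because the fingerprint is deterministic, any participating institution that observes the same attributes derives the same identifier, which is what makes cross-bank risk signals possible without sharing raw customer data.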
While technology investment is essential, collaboration is equally vital in combating fraud and scams. As new mandates emerge, banks and intermediaries must work with a partner who understands their specific needs, leveraging AI to optimise operational costs and shift the heavy lifting.
Understanding digital identities is critical to reducing financial losses. Successful fraud management strategies monitor customer patterns and profiles, ensure account attributes match across a comprehensive data consortium, and verify that no synthetic identities have been created that could lead to mule accounts or financial losses.
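The consortium attribute check can be pictured as comparing an applicant's details against records previously reported by member banks: a synthetic identity often reuses a real identifier with a different name or date of birth. The sketch below uses an in-memory dictionary and invented field names purely for illustration; a real consortium lookup would be a shared, privacy-preserving service.

```python
# Illustrative sketch of attribute matching against a (hypothetical)
# data consortium. The records and field names are made up.
consortium = {
    # national_id -> attributes previously reported by member banks
    "AB123456C": {"name": "Jane Smith", "dob": "1990-04-02"},
}

def attribute_mismatches(applicant):
    """Return the attribute names that conflict with consortium records."""
    known = consortium.get(applicant["national_id"])
    if known is None:
        return []  # no prior record to compare against
    return [k for k in ("name", "dob") if applicant[k] != known[k]]

# A real ID paired with a different date of birth is a classic
# synthetic-identity signal worth escalating before account opening.
suspect = {"national_id": "AB123456C",
           "name": "Jane Smith", "dob": "1985-01-01"}
```

Flagging such mismatches at onboarding is one way to stop a synthetic identity before it matures into a mule account.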