The growing accessibility of AI tools for creating deepfakes is raising the stakes for businesses and individuals alike: 37% of organisations have experienced deepfake voice fraud, and 29% have fallen victim to deepfake videos.
Because AI can now produce increasingly realistic and convincing deepfakes, distinguishing genuine from manipulated content is becoming harder. Regula’s survey found that fake biometric artefacts such as deepfake voice or video are perceived as real threats by 80% of companies worldwide. Concern appears highest in the US, where approximately 91% of organisations consider deepfakes a growing threat.
Ihar Kliashchou, Chief Technology Officer at Regula, stated that although neural networks can be useful for detecting deepfakes, they should be used in conjunction with other anti-fraud measures that focus on physical and dynamic parameters, such as face liveness checks and document liveness checks via optically variable security elements. According to him, creating a deepfake that displays the expected dynamic behaviour is still difficult, so verifying an object’s liveness gives companies an edge over fraudsters. In addition, cross-validating user information with biometric checks and recent transaction checks is thought to help ensure a thorough verification process.
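To illustrate the layered approach described above, the sketch below combines a neural deepfake-detection score with liveness and cross-validation results before accepting a session. It is a minimal, hypothetical example: the function names, thresholds, and data fields are assumptions for illustration, not part of Regula’s products or the survey.

```python
from dataclasses import dataclass

@dataclass
class VerificationSignals:
    # Hypothetical inputs: a deepfake-detection score from a neural network,
    # plus the results of physical/dynamic checks and cross-validation.
    deepfake_score: float            # 0.0 = likely genuine, 1.0 = likely synthetic
    face_liveness_passed: bool       # dynamic face liveness check
    document_liveness_passed: bool   # e.g. optically variable security elements
    data_cross_check_passed: bool    # user data vs. biometric / transaction records

def accept_session(signals: VerificationSignals, max_deepfake_score: float = 0.3) -> bool:
    """Accept only when the neural detector AND the independent
    physical/dynamic checks agree; no single signal is trusted on its own."""
    if signals.deepfake_score > max_deepfake_score:
        return False
    if not (signals.face_liveness_passed and signals.document_liveness_passed):
        return False
    return signals.data_cross_check_passed

# Example: a convincing deepfake may fool the detector (low score)
# yet still fail the liveness and cross-validation layers.
print(accept_session(VerificationSignals(0.1, False, True, True)))  # False
```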
Photo Source: Regula
At the same time, advanced identity fraud is not limited to AI-generated fakes: the survey found that almost half of organisations worldwide (46%) experienced synthetic identity fraud in 2022. Also known as ‘Frankenstein’ identities, this type of scam has criminals combine real and fake ID information to create entirely new, artificial identities, which are used predominantly to open bank accounts or make fraudulent purchases. That makes the banking sector the most vulnerable to this kind of identity fraud: according to the press release, almost all surveyed companies in the industry (92%) see synthetic fraud as a real threat, and approximately half (49%) have come across the scam recently.
To prevent most current identity fraud, companies should enable sophisticated document verification alongside extensive biometric checks. The following tools are considered crucial additions to their arsenal (a simplified sketch of how they might be combined follows the list):
Thorough ID verification. Extended document verification should be enabled when proving identity remotely, and a company should be able to run an extensive range of authenticity checks covering all the security features present in IDs. In a zero-trust-to-mobile scenario with NFC-based verification of electronic documents, chip authenticity can be verified on the server side, currently the most secure way to prove a document is genuine. For international businesses, an extensive document template database covering multiple regions helps validate and authenticate almost any identity document, whether on-site or remotely, thereby preventing fraud and mitigating security risks.
Biometric verification. An indispensable part of the process, it relies on robust liveness checks to prove that no malefactor is presenting non-live imagery, such as a mask, printed image, or digital photo. Biometric verification solutions should also match selfies against the person’s ID portrait and against the databases organisations use, to ensure validity. To prevent fraudsters from reusing liveness sessions, enrolment should be parameterised to each company’s requirements, and the solution should support binding attributes such as name, age, and gender to the photo to increase reliability and security.
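For illustration only, here is a minimal sketch of how these checks could be orchestrated in a remote onboarding flow. Every function name and data field is a placeholder assumption; this is not Regula’s API, just one plausible arrangement of the steps described above.

```python
from dataclasses import dataclass
from typing import Dict, List

# --- Placeholder checks: in practice these would call a vendor SDK or ---
# --- in-house services; here they only illustrate the control flow.   ---

def chip_authenticity_ok(chip_data: bytes) -> bool:
    # Server-side verification of the e-document chip (zero-trust-to-mobile).
    return bool(chip_data)

def document_template_ok(images: List[bytes]) -> bool:
    # Authenticity checks against a multi-region document template database.
    return bool(images)

def liveness_ok(selfie: bytes) -> bool:
    # Face liveness check to rule out masks, printed images, digital photos.
    return bool(selfie)

def face_match_ok(selfie: bytes, portrait: bytes) -> bool:
    # Match the live selfie against the ID portrait (and internal records).
    return bool(selfie) and bool(portrait)

def attributes_ok(claimed: Dict[str, str], chip_data: bytes) -> bool:
    # Cross-validate claimed attributes (name, age, gender) with chip data.
    return bool(claimed) and bool(chip_data)

@dataclass
class OnboardingInput:
    chip_data: bytes                   # data read from the e-document chip via NFC
    document_images: List[bytes]       # captured images of the ID
    portrait: bytes                    # portrait extracted from the document
    selfie: bytes                      # live selfie captured during the session
    claimed_attributes: Dict[str, str] # e.g. {"name": ..., "age": ..., "gender": ...}

def verify_identity(inp: OnboardingInput) -> bool:
    """Run every layer; a single failed check rejects the session."""
    return (
        chip_authenticity_ok(inp.chip_data)
        and document_template_ok(inp.document_images)
        and liveness_ok(inp.selfie)
        and face_match_ok(inp.selfie, inp.portrait)
        and attributes_ok(inp.claimed_attributes, inp.chip_data)
    )
```

The point of the structure is that each layer can fail independently, so a fraudster would need to defeat the document checks, the liveness check, the face match, and the attribute cross-validation at the same time.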
Overall, an effective identity verification process today combines several such techniques with extensive cross-validation of a user’s information and attributes.