Facial matching systems work by comparing a person's face against either a photo from an identity card or a previously captured image. However, these systems are vulnerable to spoofing attacks, in which a fraudster uses a photo, video, or mask to impersonate the actual person. To counter this, facial matching systems have traditionally required some active step from the user, such as smiling, moving the lips, blinking, following a dot on the screen with the nose, or moving the camera.
ID R&D’s IDLive Face requires nothing from the end user for liveness validation, as it operates entirely in an application’s background. AI-based algorithms power the solution.
The initial release runs on a server architecture and processes an image in near real time on a typical server; faster speeds are possible using GPUs. The algorithm works with both native mobile image capture and mobile or desktop web image capture, and no special capture software is required. Accuracy rates for false positives and false negatives meet or exceed industry expectations, as demonstrated by initial customers.

IDLive Face is available as an SDK and a Docker container on Linux and Windows for easy integration with existing or new applications on Android, iOS, and web-based platforms. IDLive Face is also available in the company's newest release of SafeChat, its zero-effort authentication solution for chatbots and virtual assistants.
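To illustrate how a passive liveness check of this kind might be wired into an application, the sketch below sends a captured face image to a server-side service (such as one deployed from a Docker container) and reads back a liveness decision. The endpoint URL, request format, and the "liveness_score" response field are illustrative assumptions for this sketch, not the documented IDLive Face API.

```python
"""Minimal sketch of calling a server-side passive liveness check.

Assumptions (not from the vendor's documentation): the service listens at
LIVENESS_URL, accepts a base64-encoded image in a JSON body, and returns a
JSON object containing a "liveness_score" between 0 and 1.
"""
import base64
import json
import urllib.request

# Assumed local deployment of the liveness container.
LIVENESS_URL = "http://localhost:8080/check_liveness"


def check_liveness(image_path: str, threshold: float = 0.5) -> bool:
    """Send one face image to the liveness service.

    Returns True if the image appears to come from a live person rather
    than a spoof such as a photo, video, or mask. No user interaction is
    required; the check runs entirely in the background.
    """
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")

    payload = json.dumps({"image": encoded}).encode("utf-8")
    request = urllib.request.Request(
        LIVENESS_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        result = json.load(response)

    # "liveness_score" is a hypothetical field name for this sketch.
    return result["liveness_score"] >= threshold


if __name__ == "__main__":
    print("live" if check_liveness("selfie.jpg") else "spoof suspected")
```

Because the check is passive, an application could run it on the same frame it already captures for face matching, with no extra prompts shown to the user.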