Voice of the Industry

We must fight deepfakes now to prevent deep problems in the future

Tuesday 23 July 2019 07:37 CET | Editor: Melisande Mual

Andrew Bud, iProov's CEO, explains the evolution of deepfake technology and the challenges that lie ahead, as well as the role of deep learning in fighting this type of fraud

Over the past few months, the rapid spread of deepfake technology has become a hot topic in the media, stoking the fires of ethical, moral and political debate. Speculation continues to grow about the technology's impact, not just on society as a whole but on the banking and payments sector specifically.

Increasingly, we see the emergence of new conspiracy theories online, typically fuelled by bogus information and doctored photos. Until now, video was the one remaining source of truth. Deepfakes, however, have flipped this notion on its head.

One major breakthrough came just a few months ago from a team of Samsung researchers who announced the creation of a system capable of constructing realistic deepfake video avatars from just a single image. Previously, a large dataset of photos and videos was required to produce an accurate replication. Now, with only a single image needed to create a lifelike video, we are seemingly entering the “post-truth” era.

Of course, editing faces in videos is nothing new. In fact, it has long been a cinematic special effect: actors were digitally recreated for films like The Matrix Reloaded and The Curious Case of Benjamin Button over a decade ago, and digitally created characters like Gollum in The Lord of the Rings are now ubiquitous in twenty-first-century film.

Although these were nascent techniques that required extensive expertise, time and significant budget, their use cases were relatively harmless, as they served entertainment purposes only.

In 2011, a team of computer scientists from Harvard were able to establish a method to replace faces in videos in a matter of hours, rather than weeks. Yet this technology needed the brains of highly educated individuals with PhDs from one of the world’s top universities to engineer it.

In under a decade, deep learning has revolutionised the ability to manipulate images, replacing human skill. Deep convolutional neural networks can now rapidly produce eerily accurate fake imagery that is believable to the naked eye. This is what we know today as deepfakes.

In the past, when bank customers wanted to open a new account or make a high value transaction, they’d need to go to the branch to verify their identity and authenticate the transaction. More recently we’ve seen many banks move to more remote means of making these checks. Techniques often involve biometrics and the user making a series of movements or sharing a short video.

The problem is that deepfakes undermine the very notion of trust in moving images. So how can banks and payment providers be sure that the person whose ID verification is coming through is indeed the genuine article, and not a digitally created copy, a deepfake?

Adoption of biometric authentication is growing, which is certainly a positive step forward in beating fraud, as it has consistently proven to be among the best means of detecting fraudulent identities. However, what banks and payment providers need to remember is that not all biometric systems work in the same way.

What deepfakes highlight so clearly is how important it is for biometric technology to be able to guarantee that a user is genuinely present, at the moment they are attempting to validate their identity. Not all can do this.

Through a liveness check, which uses the screen of the user’s device to illuminate the person’s face with a rapidly changing sequence of colours, we can capture and match the actual face of the person in a way that is spoof-resistant.
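The underlying challenge-response idea can be illustrated with a minimal sketch. This is not iProov's actual algorithm; the palette, the frame analysis step and all function names here are hypothetical, and the real work of estimating which colour is reflected off a face from video frames is only stubbed out:

```python
import hmac
import random

# Hypothetical palette of colours the device screen could flash
PALETTE = ["red", "green", "blue", "yellow", "cyan", "magenta"]

def issue_challenge(length=8, seed=None):
    """Server picks an unpredictable colour sequence to display on screen."""
    rng = random.Random(seed)
    return [rng.choice(PALETTE) for _ in range(length)]

def colours_reflected(video_frames):
    """Stub for the real analysis step: estimate, frame by frame, which
    screen colour is being reflected off the subject's face."""
    return [frame["dominant_reflection"] for frame in video_frames]

def verify_liveness(challenge, video_frames):
    """A pre-recorded or synthesised video cannot anticipate the random
    sequence, so its reflections will not match the challenge."""
    observed = colours_reflected(video_frames)
    # Constant-time comparison of the two colour sequences
    return hmac.compare_digest("".join(challenge), "".join(observed))

challenge = issue_challenge(seed=42)

# Simulated genuine session: reflections track the challenge exactly
genuine = [{"dominant_reflection": c} for c in challenge]

# Simulated replayed/synthesised video: reflections differ at every frame
spoof = [{"dominant_reflection": PALETTE[(PALETTE.index(c) + 1) % 6]}
         for c in challenge]

print(verify_liveness(challenge, genuine))  # True
print(verify_liveness(challenge, spoof))    # False
```

The key property is that the colour sequence is chosen at verification time, so even a perfect deepfake rendered in advance cannot exhibit the correct reflections.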

The challenge posed by deepfakes to the banking and payments industry shouldn't be underestimated. Fortunately, however, the very technology behind deepfakes is also the technology that can contain the chaos they have created. Security standards and attention to detail in this new "post-truth" era must be higher, tighter and more rigorous than ever. Genuine presence detection is the next frontier of cybersecurity. Online trust is fragile, but we can prevent it from shattering by acting now.

About Andrew Bud

Andrew Bud is founder and CEO of the UK-based iProov. Andrew also chairs MEF, the global trade association of the mobile ecosystem, and is non-executive chair of digital energy business Passiv Systems. His earlier achievements include Europe's first cordless data network and the mobile network of Omnitel Pronto Italia (now Vodafone Italia). He has a degree in Engineering from the University of Cambridge and is a Fellow of the IET.

About iProov

Founded in 2011, iProov is a leader in spoof-resistant, biometric facial verification technology. Its technology is used by banks and governments around the world for secure customer onboarding, logon and authentication, to ensure new and returning users are genuine and to guard them against fraudulent attempts to gain access to personal data or use a stolen identity. The company has been recognised by many awards from organisations including SINET, Citi, the NCSC and KPMG. iProov has eight granted patents on its technology, which has been adopted by large organisations worldwide.

Keywords: deepfake, iProov, Andrew Bud, biometrics, authentication, identity verification
Countries: World