Socrates declared ‘the unexamined life is not worth living’. While that may not be true for human life, it certainly makes sense when applied to artificial intelligence (AI). Take machine learning for fraud risk, for instance. The industry standard today is a black box approach, in which the model hides its internal logic from users, auditors, and regulators. Why was a transaction flagged? How did the system generate the risk score? What adjustments should be made to optimise performance?
It’s almost impossible to answer these questions without understanding why the model produced those results, and a black box system isn’t capable of providing those insights. Black box machine learning models are ‘computer says no’ systems that annoy customers, baffle domain experts, and ultimately stifle growth by increasing client churn.
Explainable AI: operationalising fraud protection and detection
While it’s great to have a machine that learns, a machine that teaches propels us into the future of fraud detection and prevention. And that’s precisely what Explainable AI (XAI) is: a machine that teaches by providing insights to teams.
In a nutshell, XAI accurately explains the individual predictions the machine makes. This means that when the machine determines a risk score, an organisation can visually see why and how the machine came to that decision. What’s more, the explanations come in the form of a few sentences in simple human language (not a programming language). Consider how useful it is when teams and individuals across your organisation – not just the data scientists – can quickly understand why the system raised or didn’t raise an alert. The context XAI provides to fraud analysts and auditors is a game-changer. However, to experience the full value of XAI, it’s worth understanding the backend process – the transparency into how the machine thinks and the value those insights provide.
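To make that concrete, here is a minimal sketch of how a per-prediction explanation can be turned into a plain-language sentence. The logistic regression model, synthetic data, and feature names are illustrative assumptions, not Feedzai’s actual implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
feature_names = ["transaction_amount", "avg_monthly_charges", "distance_from_home_km"]

# Synthetic transactions: fraud driven mostly by amount and distance (assumption).
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 1.5).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression(max_iter=1000).fit(scaler.transform(X), y)

def explain(transaction: np.ndarray) -> str:
    """Return a one-sentence, human-readable reason for the risk score."""
    z = scaler.transform(transaction.reshape(1, -1))[0]
    contributions = model.coef_[0] * z            # per-feature contribution to the logit
    top = int(np.argmax(np.abs(contributions)))   # most influential feature
    direction = "raised" if contributions[top] > 0 else "lowered"
    score = model.predict_proba(z.reshape(1, -1))[0, 1]
    return (f"Risk score {score:.2f}: '{feature_names[top]}' "
            f"{direction} the score most for this transaction.")

print(explain(X[0]))
```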
Insights increase performance
XAI delivers insights across teams by showing why models are or aren’t performing, and this directly affects the accuracy of a company’s overall fraud detection. Without this valuable insight, teams cannot be expected to know how to improve fraud detection rates. XAI delivers on these promises through two main components: feature importance (a technique closely associated with XAI) and model evaluation and comparison.
Feature importance
Feature importance is part of the holistic view that comes with XAI. At an elementary level, a feature summarises specific data – such as payment method, transaction amount, or location – over a particular time window. An example of a feature would be the average dollar amount withdrawn from a checking account per month.
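As a minimal sketch, that example feature could be computed from raw transaction records as follows. The column names and sample data are illustrative assumptions.

```python
import pandas as pd

transactions = pd.DataFrame({
    "account_id": ["A", "A", "A", "B", "B"],
    "timestamp": pd.to_datetime(
        ["2019-01-05", "2019-01-20", "2019-02-03", "2019-01-11", "2019-02-15"]),
    "withdrawal_amount": [120.0, 80.0, 200.0, 40.0, 60.0],
})

# Total withdrawn per account per calendar month...
monthly = (transactions
           .groupby(["account_id", transactions["timestamp"].dt.to_period("M")])
           ["withdrawal_amount"]
           .sum())

# ...then average those monthly totals to get the feature per account.
avg_monthly_withdrawal = monthly.groupby(level="account_id").mean()
print(avg_monthly_withdrawal)
```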
XAI ranks features by importance. For example, a model might rank a feature called ‘the average amount of credit card charges per month’ as its most important feature. Teams can see this prioritisation and adjust the feature set to improve overall fraud detection and prevention. This leads to the next critical component of XAI: model evaluation and comparison.
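A minimal sketch of such a ranking, using a tree ensemble’s built-in importance scores; the feature names and synthetic data are assumptions for illustration, and the data is constructed so the first feature genuinely matters most.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
feature_names = ["avg_monthly_card_charges", "transaction_amount", "new_device_flag"]

# Synthetic labels driven mostly by the first feature (assumption).
X = rng.normal(size=(2000, 3))
y = (1.5 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 1).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Rank features from most to least important.
ranking = sorted(zip(feature_names, model.feature_importances_),
                 key=lambda pair: -pair[1])
for name, score in ranking:
    print(f"{name}: {score:.3f}")
```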
Model evaluation & comparison
Model evaluation allows teams to test model performance before deploying the model. Testing a machine learning model’s performance benefits organisations because they can see both how accurate the model is and what its false positive rate is before relying on it for fraud detection and prevention. If the false positive rate doesn’t meet the organisation’s goals, the team adjusts the model until it reaches the required performance.
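Below is a minimal sketch of this pre-deployment check: hold out a test set, then measure recall (fraud caught) alongside the false positive rate. The data and model are synthetic assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(5000, 4))
y = (X[:, 0] + 0.7 * X[:, 1] + rng.normal(scale=0.6, size=5000) > 2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()

print(f"recall (fraud detected): {tp / (tp + fn):.2%}")
print(f"false positive rate:     {fp / (fp + tn):.2%}")
```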
Model comparison, as the name implies, provides a side-by-side view of how different models perform. With this information, teams can select the best fraud detection models to put into production.
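A minimal sketch of such a comparison, scoring each candidate on the same held-out data; the candidate line-up and synthetic data are illustrative assumptions, not a recommended set of models.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(5000, 4))
y = (X[:, 0] + 0.7 * X[:, 1] + rng.normal(scale=0.6, size=5000) > 2).astype(int)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

# Fit each candidate and report recall and false positive rate side by side.
for name, model in candidates.items():
    tn, fp, fn, tp = confusion_matrix(
        y_test, model.fit(X_train, y_train).predict(X_test)).ravel()
    print(f"{name:>20}: recall={tp / (tp + fn):.2%}  FPR={fp / (fp + tn):.2%}")
```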
Putting it all together
Wisdom ranked number one in feature importance for Socrates, and so he chose death over the unexamined life. The ancients were dramatic, and luckily, fraud detection and prevention isn’t a Greek tragedy. Still, fraud isn’t a victimless crime. It costs billions and has the potential to devastate individuals and organisations. We owe it to our customers, employees, and institutions to move away from standard AI, with its cloaked, black box secrecy, and toward XAI, designed with complete transparency.
This editorial was first published in the Fraud Prevention and Online Authentication Report 2019/2020. The Guide covers some of the security challenges encountered in the ecommerce, banking, and financial services ecosystems. Moreover, it provides payment, fraud, and risk management professionals with a series of insightful perspectives on key aspects such as fraud management, identity verification, online authentication, and regulation.
About Pedro Bizarro
Pedro Bizarro is co-founder and Chief Science Officer of Feedzai. Drawing on a history in academia and research, Pedro has turned his technical expertise into entrepreneurial success, helping to develop Feedzai’s industry-leading artificial intelligence platform to fight fraud. Pedro has been an official member of the Forbes Technology Council, a visiting professor at Carnegie Mellon University, and a Fulbright Fellow, and has worked with CERN, the European Organization for Nuclear Research. Pedro holds a Computer Science PhD from the University of Wisconsin-Madison.