Voice of the Industry

Enhancing AI explainability in the age of generative AI

Friday 7 July 2023 09:59 CET | Editor: Alin Popa | Voice of the industry

Discussing AI explainability, Dr Janet Bastiman from Napier emphasizes the importance of understanding model breakdowns, building trust, and providing plain language explanations.


Explainable AI has had a chequered past. From initial unfounded claims that models can be either ‘explainable or accurate’ but not both, through to being used by data scientists to check their own homework without proper validation, it is often viewed as an afterthought or a regulatory annoyance whose importance is confined to testing. What it actually offers is a means of understanding where and why models break down, but most importantly it can build trust and confidence in our end users – the very people who have to live with the impact of the model’s decisions.

Who needs explanations and when?

Research by the Alan Turing Institute with the ICO in the UK indicated that AI systems need to provide explanations in the same scenarios in which you would ask a human for one. Typically, this is when we either don’t understand the information presented or when we disagree with it and need to be convinced.

As an implementer of AI models, you cannot know all these cases in advance – disagreement and lack of understanding are personal. Furthermore, in almost all use cases in finance and regulatory technology, the need for an explanation is mandated. Any company using AI in a global setting will require an end-user explanation.

It’s the end user that matters most here – we are not talking about mathematical or graph-based explanations fit only for data scientists – the explanation needs to be in plain language suitable for a compliance officer or potentially a customer of the bank, with multiple levels of explanation to support different end-user types. Unless your end user can understand your model’s output and its reasoning, you do not have explainable AI.

Complex models present challenges

The more complex the model, the more important the explanation. While ‘creative’ generative AIs such as Midjourney don’t need to explain to us how they made the art, there is an increasing requirement to understand the sources used for the final artwork to ensure that copyright hasn’t been breached. ChatGPT delivers paragraphs that appear convincing, but minor changes to the prompt can give different results that are simply stated as fact. Some of its citations are fake references that merely follow the correct format – showing that there is a disconnect between fact and creation and no true explanation of where the information originated. Extend these high-profile issues to all the models currently in use and not only is there a lack of trust, but also a real need to ensure that all of these systems can be validated by their users.

So what are the options to make your systems explainable? It varies depending on the type of AI you are using, but you have to start with a commitment to ensuring your users are informed. Make explainability a deliverable at the heart of any models you create and ensure that it is a functional requirement in the design phase. This may exclude some of your options, whether third-party APIs or in-house unexplainable models, so that you build a system that delivers what is required.

Techniques for explanations

Available techniques fall into two broad categories: inherently explainable models, where users can follow the model’s reasoning directly, and post-hoc (after the event) explanations, which rely on correlations between input features and outputs to provide reasoning.

Inherently explainable models are those simple enough (or that have regions of simplicity) that the end user can directly follow the decision and reasoning process to the result. A trivial example of this is a decision tree: ‘if a and b then c’. For other models, statistical techniques (e.g. Bayesian analysis, decision sets) at key levels can provide probabilistic interpretations that help guide the user to understand what led to the result.
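As a minimal sketch of what ‘inherently explainable’ means in practice, the example below trains a small decision tree with scikit-learn and prints its rules as plain conditions an end user can read. The feature names and toy data are illustrative assumptions, not a real AML model.

```python
# Minimal sketch: an inherently explainable model whose decision path can be
# read directly as 'if a and b then c' rules. Data and feature names are
# illustrative only.
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy transaction features: [amount_gbp, is_cross_border]
X = [[50, 0], [12000, 1], [300, 0], [9500, 1], [150, 0], [20000, 1]]
y = [0, 1, 0, 1, 0, 1]  # 0 = not suspicious, 1 = suspicious

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text renders the learned rules as plain, human-readable conditions
print(export_text(tree, feature_names=["amount_gbp", "is_cross_border"]))
```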

In most cases, models are evolutions of pre-trained models built on known ‘good’ architectures, so there isn’t the luxury of designing from scratch. While interpretable layers can be added, most often data scientists will use post-hoc explanations. Generally, these methods change or remove some of the input features, or run the model in reverse, to determine which features were most important in producing the overall result. This can help guide the user to key issues in their data that they may not otherwise pick up.
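One simple post-hoc method in this family is permutation importance: shuffle one feature at a time and measure how much the model’s score degrades. The sketch below uses scikit-learn’s permutation_importance on a toy model; the data, feature names, and model are illustrative assumptions.

```python
# Sketch of a post-hoc, perturbation-style explanation: permute each input
# feature and see how much the model's score drops. Model and data are
# illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
# Only features 0 and 2 actually drive the label in this toy data set
y = (X[:, 0] + 2 * X[:, 2] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["f0", "f1", "f2", "f3"], result.importances_mean):
    print(f"{name}: importance {score:.3f}")
```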

A great example of this is the use of Shapley Values from game theory to determine which parts of the data affected the prediction positively or negatively by ‘removing’ those features.

Figure 1: An example of a Shapley values output presented graphically, from a model looking at AML detection on a group of transactions. Features that contribute to a ‘suspicious’ flag are highlighted in red and labelled for the end user.
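In practice, Shapley values are usually computed with a library such as shap. The sketch below, assuming a tree-based classifier and a small DataFrame of made-up transaction features, shows per-transaction contributions towards a ‘suspicious’ prediction; it is an illustration, not the model behind Figure 1.

```python
# Sketch of computing Shapley values with the shap library for a tree-based
# model; feature names and data are assumptions for illustration.
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Toy transaction features and labels (1 = flagged suspicious)
X = pd.DataFrame({
    "amount_gbp": [50, 12000, 300, 9500],
    "is_cross_border": [0, 1, 0, 1],
    "txns_last_24h": [1, 14, 2, 9],
})
y = [0, 1, 0, 1]

model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Per-transaction contributions: positive values push towards 'suspicious'
print(pd.DataFrame(shap_values, columns=X.columns))
```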

Further techniques include backpropagation through a network to determine importance (e.g. DeepLIFT). This can highlight regions of your data that were important for the output, not only at the input but also at key layers in the network. It can be time-consuming, and visualising the output for the end user can be a challenge for non-visual data.
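For neural networks, one way to try this kind of attribution is the DeepLift implementation in the Captum library for PyTorch. The sketch below is a minimal, hedged example; the network architecture, input shape, and baseline are illustrative assumptions.

```python
# Sketch of DeepLIFT-style attribution with Captum for a small PyTorch
# network; the architecture and input are illustrative assumptions.
import torch
import torch.nn as nn
from captum.attr import DeepLift

model = nn.Sequential(
    nn.Linear(10, 16),
    nn.ReLU(),
    nn.Linear(16, 2),
)
model.eval()

inputs = torch.randn(1, 10)          # one example with 10 features
baseline = torch.zeros_like(inputs)  # reference input ('absence' of signal)

dl = DeepLift(model)
# Attribute the 'suspicious' class (index 1) back to the input features
attributions = dl.attribute(inputs, baselines=baseline, target=1)
print(attributions)
```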

Perturbation techniques such as LIME add noise randomly to the data to see how the predictions change, and then use these altered predictions to compute the important features. A close companion of these techniques is the use of anchors and counterfactuals in the data as a faster route to an explanation.
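A minimal sketch of a LIME tabular explanation is shown below: LIME perturbs the input row, fits a simple local model, and reports the most influential features for that single prediction. The training data, feature names, and classifier are illustrative assumptions.

```python
# Sketch of a LIME tabular explanation: LIME perturbs the input row with noise,
# fits a simple local model, and reports the most influential features.
# Training data, feature names, and the model are illustrative assumptions.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))
y_train = (X_train[:, 0] + 2 * X_train[:, 2] > 0).astype(int)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=["f0", "f1", "f2", "f3"],
    class_names=["clear", "suspicious"],
    mode="classification",
)

# Explain a single prediction in terms of its most important features
explanation = explainer.explain_instance(
    X_train[0], model.predict_proba, num_features=3
)
print(explanation.as_list())
```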

What does this mean for the future?

While this is a very brief overview of some of the more popular solutions available, all of the post-hoc techniques provide correlations between the input data and the model’s output. Care should be taken before treating a correlative explanation as causal, particularly in high-risk scenarios, which is why the EU is pursuing both explanations and humans in the loop for decision-making in such cases. If you can, build explainability into your models from the beginning.

About Janet Bastiman

Chair of the Royal Statistical Society’s Data Science and AI Section and member of the FCA’s newly created Synthetic Data Expert Group, Janet started coding in 1984 and discovered a passion for technology. She holds multiple degrees and a PhD in Computational Neuroscience. Janet helped both start-ups and established businesses implement and improve their AI offerings before applying her expertise as Chief Data Scientist at Napier. She regularly speaks at conferences worldwide on topics in AI including explainability, testing, efficiency, and ethics.

About Napier

Napier is a new breed of financial crime compliance technology specialist. Our platform, Napier Continuum, is transforming compliance from legal obligation to competitive edge. Trusted by leading financial institutions, Napier uses industry knowledge and cutting-edge technologies such as AI and machine learning to help businesses comply with AML regulations, detect suspicious behaviours and fight financial crime.


Keywords: artificial intelligence, machine learning, API, data analytics
Companies: Napier
Countries: World
