
AI – time to adopt an explainability model?

Stephen Thompson

Over the last 20 years, the use of artificial intelligence (AI) has boomed across every industry and is routinely applied to increase productivity, automate menial tasks and improve customer satisfaction.

Driven by machine learning (ML) decision-making models, the technology has grown at such a pace that regulation and best practice have struggled to keep up.

One such element is the ability to understand and explain why AI has made a decision and taken the associated action. For some businesses, there is a legal or regulatory obligation to explain how a decision was reached, and explainability is also an integral element of the European Commission's 'Ethics Guidelines for Trustworthy AI'. But with such complex mathematics and computer science in play, do you understand what's going on inside the black box? And do you have a model in place to explain what your AI is doing?

Trustworthiness is key to adoption

Explainable AI (XAI) is key to trustworthy AI and may have a significant impact on how the industry develops in the future. Ideally, anything so integral to widespread adoption should be factored into the development of each and every AI program as a key building block but, due to the nature of the tech, this just isn't possible. Instead, it should be built on top of your software and model development frameworks to support your machine learning and AI strategy.

XAI can be a moral and legal obligation

The importance of machine learning explainability goes beyond the technical aspect of improving modelling capabilities and can be applied across businesses. In credit risk, for instance, we are starting to see black box models such as stochastic gradient boosting being implemented to calculate creditworthiness. Here, the business has a legal obligation to provide explanations to customers, but in a paradigm where no simple answer exists. Businesses need XAI in order to reap the benefits of advanced modelling while maintaining regulatory and legal rigour.
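
To make the problem concrete, here is a minimal sketch (in Python, with entirely hypothetical data, feature names and threshold) of the kind of black box model described above: a stochastic gradient boosting classifier scores an applicant, but its output contains no reason that could be passed on to the customer.

    # Minimal sketch of a black box credit model.
    # The data, feature names and threshold are hypothetical, for illustration only.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 4))    # e.g. income, debt ratio, age, card utilisation
    y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

    # subsample < 1.0 is what makes the boosting "stochastic"
    model = GradientBoostingClassifier(subsample=0.8).fit(X, y)

    applicant = X[:1]
    print(model.predict_proba(applicant))   # a probability of approval, but no explanation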

XAI can also help audit decision making processes (both human and machine). It offers greater understanding around, and monitoring of, the key features driving important decisions and can be used to craft richer audit trails and more rigorous justifications – for example in regulatory reporting.

Linear regression is the easiest to explain

For traditional machine learning methods, such as linear or logistic regression or a decision tree, understanding how the model works globally and determining the reason behind an individual decision are relatively simple: follow the weighting of each of the individual features of your data point and you have your explanation. However, cutting-edge ML solutions have moved far beyond linear regression.
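
As a rough sketch of what 'following the weightings' means in practice (the data and feature names below are made up), a logistic regression decision can be broken down into one contribution per feature: the model coefficient multiplied by that feature's value for the data point in question.

    # Sketch: explaining a single logistic regression decision from its weights.
    # Data and feature names are hypothetical.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    feature_names = ["income", "debt_ratio", "years_at_address"]
    X = rng.normal(size=(500, 3))
    y = (X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.3, size=500) > 0).astype(int)

    model = LogisticRegression().fit(X, y)

    x = X[0]                             # the individual decision to explain
    contributions = model.coef_[0] * x   # each feature's contribution to the log-odds
    for name, value in zip(feature_names, contributions):
        print(f"{name}: {value:+.2f}")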

Explainable AI is a new field of research

We are seeing a neural net renaissance, and newer tree-based models such as random forests and extreme gradient boosting are leaving traditional models in the dust. The more advanced the model, however, the more opaque its inner workings. In much the same way that you cannot trace individual signals through a person's brain to determine why they made a certain decision, tracing the decision process of an advanced ML model is beyond human comprehension.

But this type of technology has too much potential to be ignored, so it’s vital to be able to explain the model – giving rise to a new field of research around XAI.

It’s the final stage of the AI framework

Since peeking inside the black box isn’t possible, XAI is built alongside your ML program to explain how it works and provide an additional perspective on it. The key benefits of these explainability models are:

Trust: good explanations increase user trust in the model. There are even instances (such as determining credit approval) where providing a trustworthy justification for a decision is a legal requirement.

Reliability: the more people who understand the model the better; a fresh perspective on a model can help the developer improve the AI. Understanding why a model has made a decision also means you can check the logic of that decision. For example, the software that could accurately tell huskies and wolves apart made the right decision based on the wrong logic, relying on the presence of snow in the picture rather than the animals themselves.

Ethics: being able to explain a decision is essential for controlling biases present in your model. Identifying biases early prevents them from snowballing out of control.

Types of explainability model

The two predominant explainability models we are seeing emerge are Local Interpretable Model-Agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP), both of which are sketched in Python after the list below:

  • LIME works by fitting simple, interpretable (linear or single decision tree) models on the local neighbourhood of an individual decision, rather than across the whole dataset. In short, this means you don't have to look at every data point to get a rough idea of how a decision was made, just similar ones. For example, imagine you are trying to value a house with your model. You might run through similar houses (similar square meterage, location, number of bedrooms, etc.) to build a simple linear regression and determine how each of the features impacted the decision made. You don't need to look at the prices of mansions in Edinburgh in order to understand how your maisonette in Fulham has been valued.
  • SHAP works similarly to a regression model but uses Shapley values as the feature weightings. Shapley values come from game theory and are used to calculate how much each player in a coalition has contributed to the outcome. All possible subsets of other players are considered and the impact of the addition of the player is calculated in each case. In the case of SHAP, each feature of your data is considered a player in a coalition and the relative contribution (Shapley value) is calculated and used as the weightings for a regression model. SHAP can be used for individual explanations or can be aggregated to give global interpretability to your model.
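
The sketch below shows how both approaches are typically called from the lime and shap Python packages, including their built-in plotting helpers. The house pricing model, data and feature names are hypothetical, and exact arguments may differ between package versions.

    # Sketch: explaining a black box house pricing model with LIME and SHAP.
    # Data, feature names and model are hypothetical; package APIs may vary by version.
    import numpy as np
    import shap
    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(2)
    feature_names = ["floor_area", "bedrooms", "distance_to_station"]
    X = rng.normal(size=(800, 3))
    y = 3 * X[:, 0] + X[:, 1] - X[:, 2] + rng.normal(scale=0.5, size=800)

    model = RandomForestRegressor(n_estimators=100).fit(X, y)

    # LIME: fit a simple local model around one data point (the house being valued).
    lime_explainer = LimeTabularExplainer(X, feature_names=feature_names, mode="regression")
    lime_exp = lime_explainer.explain_instance(X[0], model.predict, num_features=3)
    print(lime_exp.as_list())            # local feature weights for this one valuation
    lime_exp.as_pyplot_figure()          # built-in bar chart of those weights

    # SHAP: Shapley-value contributions, per prediction or aggregated globally.
    shap_explainer = shap.TreeExplainer(model)
    shap_values = shap_explainer.shap_values(X[:100])
    print(shap_values[0])                # each feature's contribution to the first valuation
    shap.summary_plot(shap_values, X[:100], feature_names=feature_names)   # global view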

Choosing the right model for you

Both of these solutions are now relatively simple to implement in Python or R, and have packages that come with a range of built-in tools for visualisations (as in the sketch above). They can also both be adapted to work with different data formats: tabular, text or image. It's really about choosing the right one for you.

  • Simplicity of explanations: LIME offers very human-friendly, understandable interpretations, key for non-technical employees or customers; SHAP's Shapley values require advanced mathematical/game theory experience to interpret reliably.
  • Computational requirement: LIME is simple to implement in Python or R; SHAP can take a long time to calculate every subset of features (the cost increases exponentially with the number of features).
  • Reliability of explanations: LIME models are very volatile, and small changes in the definition of 'local' can lead to vastly different results; SHAP's Shapley values are mathematically rigorous, so provide fully justifiable results.
  • Global or individual explanations: LIME can only provide individual explanations; SHAP can provide both global and individual explanations.

What to do next

Modern machine learning has too much potential to ignore; if your business isn't keeping up with it, it will be left behind. But all innovation should be approached with caution: blindly adopting black box solutions can open you up to a litany of technical and ethical problems in the future. As such, you should consider which explainability model is right for your AI and take the necessary steps to build it into your model development frameworks.

Please contact us for further information and help with applying an appropriate explainability model.

By Timothy Clifton-Wright
