Making The Black Box Of AI Transparent With eXplainable AI (XAI)
Artificial Intelligence (AI) systems are being used more and more in our everyday lives to help us make decisions.
When a human makes a decision, you can ask them how they came to it. But many AI algorithms provide a prediction without any particular reason, and you cannot ask the machine to explain how it arrived at that decision.
It can be dangerous to rely on a black box. How can you trust a model without transparency? We need accountability and explanation to trust these systems. The more AI becomes part of our lives, the more we need these black boxes to be transparent.
The solution is to come up with AI systems that explain their decision-making — what is known as eXplainable AI or XAI.
The goal of XAI is to provide verifiable explanations of how machine learning systems make decisions and to keep humans in the loop.
Explainability is not as easy as it looks. The more complicated a system gets, the more connections it makes between different pieces of data, often numbering in the billions. For example, when a system classifies images in healthcare, such as the early diagnosis of pneumonia from X-rays, it may use roughly 3 to 30 million parameters to decide whether a person has pneumonia. Explaining why the system reached that decision for each patient carries a large computational cost, which is especially critical in deep learning.
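To get a feel for that scale, here is a minimal sketch, assuming PyTorch and torchvision are installed and using DenseNet-121 purely as a stand-in for a typical chest X-ray classification backbone, that counts a model's trainable parameters:
from torchvision import models
# Build the architecture only; the weight values don't matter for counting them
model = models.densenet121()
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"DenseNet-121 has ~{n_params / 1e6:.1f} million trainable parameters")
# Roughly 8 million weights sit between the input X-ray and the final prediction,
# which is why attributing a single decision back to the input is expensive.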
The Explainability Tradeoff
Having AI that is trustworthy, reliable, and explainable to a human is a must in industries like healthcare, banking, insurance, and automotive. We may not need to understand why a YouTube video is recommended to us, but we certainly want to understand why a healthcare AI prognosis system predicts a diabetes risk for us. This raises a question: do explainable AI efforts lessen the motivation to make the algorithm itself better? As an algorithm gets more accurate, it usually gets more complicated and harder to interpret, and with deep learning the black box becomes even more opaque. How can we get explainable AI without sacrificing performance?
There are two ways to produce explainable AI:
1. Ease up: Use machine learning approaches that are inherently explainable, such as decision trees and Bayesian classifiers (see the sketch after this list).
2. Struggle: Develop new ways to explain complex neural network approaches. Researchers and institutions are currently working on methods for explaining these more complicated models.
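Here is a minimal sketch of the first approach, assuming scikit-learn is installed and using its bundled iris dataset purely as a stand-in: a shallow decision tree whose full decision logic can be printed as human-readable rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text
# Train an inherently interpretable model: a shallow decision tree
iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3).fit(iris.data, iris.target)
# Every prediction can be traced through these plain if/else rules
print(export_text(tree, feature_names=iris.feature_names))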
XAI Methods: SHAP values
There are many methods for creating eXplainable AI; here is a brief look at SHAP values. SHAP (SHapley Additive exPlanations) was introduced by Scott Lundberg and Su-In Lee at the University of Washington in 2017, and it attributes a model's prediction to its input features using Shapley values from game theory.
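The idea comes from cooperative game theory: a feature's SHAP value is its average marginal contribution to the prediction over all possible feature coalitions. The toy sketch below computes exact Shapley values for a hypothetical two-feature "game"; the payoff table is made up purely for illustration.
from itertools import combinations
from math import factorial

def shapley_value(i, players, value):
    # Weighted average of the marginal contribution value(S + {i}) - value(S)
    # over every coalition S that excludes player i
    others = [p for p in players if p != i]
    n = len(players)
    phi = 0.0
    for size in range(len(others) + 1):
        for S in combinations(others, size):
            weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            phi += weight * (value(set(S) | {i}) - value(set(S)))
    return phi

# Hypothetical payoffs: the "model output" when only these features are known
payoff = {frozenset(): 0, frozenset("A"): 10, frozenset("B"): 20, frozenset("AB"): 50}
v = lambda S: payoff[frozenset(S)]
print(shapley_value("A", ["A", "B"], v))  # 20.0
print(shapley_value("B", ["A", "B"], v))  # 30.0 — the two contributions sum to 50
In practice, the SHAP library computes or approximates these values efficiently for real models, which is what the steps below walk through.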
Steps to calculate the SHAP values
1. Install
SHAP can be installed from either PyPI or conda-forge:
pip install shap
conda install -c conda-forge shap
2. Import
import xgboost
import shap
3. Load the JavaScript visualization code
shap.initjs()
4. Train the model
# train an XGBoost model on the Boston housing data
# (newer SHAP releases replace boston() with other datasets such as shap.datasets.california())
X, y = shap.datasets.boston()
model = xgboost.train({"learning_rate": 0.01}, xgboost.DMatrix(X, label=y), 100)
5. Explain the model’s predictions
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
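A quick sanity check, assuming the default regression objective of the XGBoost model trained above: each row's SHAP values plus the explainer's expected value should add up, to numerical precision, to the model's prediction.
import numpy as np
# SHAP additivity: base value + per-feature contributions ≈ model output for each row
preds = model.predict(xgboost.DMatrix(X))
reconstructed = explainer.expected_value + shap_values.sum(axis=1)
print(np.allclose(preds, reconstructed, atol=1e-4))  # expected: True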
6. Visualize the prediction you want to explain
shap.force_plot(explainer.expected_value, shap_values[0,:], X.iloc[0,:])
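The force plot explains one prediction at a time; the same SHAP values can also be summarized over the whole dataset to show which features matter most globally:
# Global feature importance: one dot per row, colored by feature value
shap.summary_plot(shap_values, X)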
For more information, visit Scott Lundberg's SHAP repository on GitHub (github.com/slundberg/shap).
Conclusion
This was a quick overview of how Shapley values are applied to create interpretable and explainable models. SHAP values are a great tool for developing XAI.
Any suggestions to seymatas@gmail.com will be very appreciated!