Explainable AI

Jun 07
Approximating Shapley Values for Machine Learning

The how and why of Shapley value approximation, explained in code
6 min read
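As a rough taste of the idea behind the article above, Shapley values can be approximated by Monte Carlo sampling of feature orderings. The sketch below is a minimal illustration under stated assumptions: the model, feature names, and baseline values are hypothetical stand-ins, not taken from the article.

```python
import random

def model(features):
    # Hypothetical stand-in model: a simple weighted sum of features.
    weights = {"age": 2.0, "income": 0.5, "tenure": 1.0}
    return sum(weights[f] * v for f, v in features.items())

def monte_carlo_shapley(feature, x, baseline, n_samples=2000, seed=0):
    """Approximate the Shapley value of `feature` by sampling random
    feature orderings and averaging the marginal contributions."""
    rng = random.Random(seed)
    names = list(x)
    total = 0.0
    for _ in range(n_samples):
        order = names[:]
        rng.shuffle(order)
        pos = order.index(feature)
        # Features before `feature` in the ordering take their real
        # values; all remaining features stay at the baseline.
        before = {f: (x[f] if f in order[:pos] else baseline[f]) for f in names}
        after = dict(before, **{feature: x[feature]})
        total += model(after) - model(before)
    return total / n_samples

x = {"age": 40, "income": 60, "tenure": 5}
baseline = {"age": 30, "income": 50, "tenure": 2}
print(round(monte_carlo_shapley("age", x, baseline), 2))  # → 20.0
```

For this linear stand-in the marginal contribution of `age` is the same under every ordering (2.0 × 10 = 20.0), so the estimate is exact; for non-additive models the sampled orderings genuinely matter and more samples reduce the variance.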
Dec 31
How Shapley Values Work

In this article, we explore how Shapley values work: not with cryptic formulae, but through code and simplified explanations.
10 min read
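To complement the article above, the exact Shapley value can be computed by enumerating the power set of feature coalitions and weighting each marginal contribution. The snippet below is a small illustration with a made-up three-player value function (the players and payoffs are assumptions for the example, not from the article).

```python
from itertools import combinations
from math import factorial

def value(coalition):
    # Hypothetical coalition value: each player contributes a fixed
    # amount, plus a bonus of 6 when players 0 and 1 cooperate.
    v = sum({0: 10, 1: 20, 2: 30}[p] for p in coalition)
    if 0 in coalition and 1 in coalition:
        v += 6
    return v

def shapley(player, players):
    """Exact Shapley value: the weighted average of the player's marginal
    contribution over every coalition of the remaining players."""
    n = len(players)
    others = [p for p in players if p != player]
    total = 0.0
    for size in range(len(others) + 1):
        for coalition in combinations(others, size):
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            total += weight * (value(coalition + (player,)) - value(coalition))
    return total

print([round(shapley(p, [0, 1, 2]), 2) for p in (0, 1, 2)])  # → [13.0, 23.0, 30.0]
```

Note how the cooperation bonus of 6 is split evenly between players 0 and 1 (3 each), and the three values sum to the grand-coalition payoff of 66; this exhaustive enumeration is what the sampling approach above approximates when the power set is too large.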
May 16
Supervised Clustering: How to Use SHAP Values for Better Cluster Analysis

Cluster analysis is a popular method for identifying subgroups within a population, but the results are often challenging to interpret.
9 min read
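The core idea of supervised clustering, as the title above suggests, is to cluster rows in the space of their SHAP contributions rather than their raw feature values. The sketch below is a self-contained illustration under loud assumptions: a hand-rolled linear model (whose per-feature contribution is simply weight × (value − mean)) and a tiny deterministic 2-means routine stand in for the shap library and a real clustering algorithm, and the data is made up.

```python
# Hypothetical data: each row is a customer described by two features,
# scored by a simple linear model standing in for a real one.
WEIGHTS = {"age": 2.0, "income": -1.0}
rows = [{"age": a, "income": i} for a, i in
        [(20, 80), (22, 75), (60, 10), (65, 15), (58, 12), (25, 70)]]
means = {f: sum(r[f] for r in rows) / len(rows) for f in WEIGHTS}

def contributions(row):
    # For a linear model, each feature's contribution (its Shapley value)
    # is weight * (value - mean); this vector is the "explanation space".
    return tuple(WEIGHTS[f] * (row[f] - means[f]) for f in WEIGHTS)

def dist2(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def two_means(points, iters=10):
    """Tiny 2-means clustering on contribution vectors, so rows are
    grouped by how the model explains them, not by raw feature values.
    Deterministic init: the first point and the point farthest from it."""
    centers = [points[0], max(points, key=lambda p: dist2(p, points[0]))]
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [0 if dist2(p, centers[0]) <= dist2(p, centers[1]) else 1
                  for p in points]
        for j in (0, 1):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centers[j] = tuple(sum(c) / len(members) for c in zip(*members))
    return labels

labels = two_means([contributions(r) for r in rows])
print(labels)  # → [0, 0, 1, 1, 1, 0]: young/high-income vs. older/low-income
```

Because the clustering runs on contribution vectors, each resulting cluster comes with a built-in explanation: the average contribution per feature tells you *why* the model groups those rows together, which is exactly the interpretability gain the article's title refers to.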
Nov 01
Explaining Machine Learning Models: A Non-Technical Guide to Interpreting SHAP Analyses

With interpretability becoming an increasingly important requirement for machine learning projects, there's a growing need for the complex outputs of techniques such as SHAP to be communicated to non-technical stakeholders.
12 min read