Hi everyone, I’m Rob Geada, an engineer on the
TrustyAI team, and I’m going to talk a little about my work with Shapley Additive Explanations,
or SHAP. Now before I go into the specifics of SHAP and how it works, I first have to talk
about the mathematical foundation it's built on, and that's Shapley values from game theory.
Shapley values were invented by Lloyd Shapley as a way of providing a fair
solution to the following question: if we have a coalition C that collaborates to
produce a value V, how much did each individual member contribute to that final value?
So what does this mean? We have a coalition C, a group of cooperating members that work together to
produce some value V, called the coalition value. This could be something like a company of
employees that together generate a certain profit, or a dinner group running up a restaurant bill.
We want to know exactly how much each member contributed to that final coalition value: what
share of the profit each employee deserves, or how much each person in the dinner
party owes to settle the bill.
However, answering this gets tricky when there are
interacting effects between members, when certain groupings cause members to contribute more
than the sum of their parts. To find a fair answer to this question that takes these
interaction effects into account, we can compute the Shapley value for each member of the coalition.
So let’s compute the Shapley value for member 1 of our example coalition. The way this is done is by
sampling a coalition that contains member 1, and then looking at the coalition formed by removing
that member. We then look at the respective values of these two coalitions and take the
difference between them. This difference is the marginal contribution of member 1 to the coalition
consisting of members 2, 3, and 4: how much member 1 contributed to that specific group.
We then enumerate all such pairs of coalitions, that is, all pairs of coalitions that differ
only in whether or not member 1 is included, and compute the marginal contribution
from each pair. The mean marginal contribution is the Shapley value of that member. We can do
this same process for each member of the coalition, and we've found a fair solution
to our original question. Mathematically, the whole process looks like this, but all we
need to know is that the Shapley value is the average contribution that a particular
member makes to the coalition value.
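For anyone reading along rather than watching, that formula is the standard Shapley value definition, where C is the full coalition, v(S) is the value produced by a sub-coalition S, and phi_i is the Shapley value of member i:

$$
\phi_i = \sum_{S \subseteq C \setminus \{i\}} \frac{|S|!\,\left(|C| - |S| - 1\right)!}{|C|!}\,\Big( v(S \cup \{i\}) - v(S) \Big)
$$

The term in parentheses is the marginal contribution we just described, and the factorial weighting is what averages it fairly over every possible way of building up the coalition.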
Now, translating this concept to model
explainability is relatively straightforward, and that’s exactly what Scott Lundberg and Su-In Lee
did in 2017 with their paper “A Unified Approach to Interpreting Model Predictions,” where they
introduced SHAP. SHAP reframes the Shapley value problem from one where we look at how members
of a coalition contribute to a coalition value to one where we look at how individual
features contribute to a model's outputs. They do this in a very specific way, one
that we can get a clue to from the name of their algorithm: Shapley Additive
Explanations. We know what Shapley values are, we know what explanations are, but
what do they mean by additive?
Lundberg and Lee define an additive feature
attribution as follows: if we have a set of inputs x and a model f(x), we can define a set
of simplified local inputs x' (which usually means that we turn a feature vector into a discrete
binary vector, where features are either included or excluded), and we can also define an explanatory
model g. What we need to ensure is that,
one: if x' is roughly equal to x, then
g(x') should be roughly equal to f(x);
and two: g must take this form, where
phi_0 is the null output of the model, that is, the average output of the model, and
phi_i is the explained effect of feature i, how much that feature changes the output of the
model. This is called its attribution.
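Written out, that additive form (with M simplified features, each x'_i being 1 if the feature is included and 0 if excluded) is:

$$
g(x') = \phi_0 + \sum_{i=1}^{M} \phi_i\, x'_i
$$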
If we have these two, we have an explanatory
model with additive feature attribution. The advantage of this form of explanation is
that it's really easy to interpret: we can see the exact contribution and importance of each feature
just by looking at the phi values.
Now, Lundberg and Lee go on to describe
a set of three desirable properties of such an additive feature attribution method: local
accuracy, missingness, and consistency. We've actually already touched upon local
accuracy; it simply says that if the input and the simplified input are roughly the same, then
the actual model and the explanatory model should produce roughly the same output.
Missingness states that if a feature is excluded from the model, its attribution
must be zero; that is, the only thing that can affect the output of the explanation model is the
inclusion of features, not the exclusion. Finally, we have consistency (and this one's a little harder
to represent mathematically), which states that if the original model changes so that a
particular feature's contribution changes, the attribution in the explanatory model
cannot change in the opposite direction. So, for example, if we have a new model where a
specific feature has a more positive contribution than in the original, the attribution in our
new explanatory model cannot decrease.
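Roughly, in the paper's notation (where f_x(z') denotes the original model evaluated with only the features in z' present, and z' \ i means z' with feature i excluded), the three properties look like this:

$$
\begin{aligned}
&\text{Local accuracy:} && f(x) = g(x') = \phi_0 + \sum_{i=1}^{M} \phi_i\, x'_i \quad \text{when } x \text{ corresponds to } x' \\
&\text{Missingness:} && x'_i = 0 \;\Rightarrow\; \phi_i = 0 \\
&\text{Consistency:} && f'_x(z') - f'_x(z' \setminus i) \ge f_x(z') - f_x(z' \setminus i)\ \ \forall z' \;\Rightarrow\; \phi_i(f', x) \ge \phi_i(f, x)
\end{aligned}
$$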
Now, while a bunch of different explanation methods
satisfy some of these properties, Lundberg and Lee argue that only SHAP satisfies all three:
if the feature attributions in our additive explanatory model are specifically chosen
to be the Shapley values of those features, then all three properties are upheld. The problem
with this, however, is that computing exact Shapley values means you have to sample the coalition
values for every possible feature coalition, which in a model explainability setting means we
have to evaluate our model that many times. For a model that operates over 4 features, it's
easy enough, it's just 2^4 = 16 coalitions to sample to get all the Shapley values. For 32 features,
that's 2^32, over 4 billion coalitions, which is entirely untenable. To get around this,
Lundberg and Lee devise the Shapley Kernel, a means of approximating Shapley
values through far fewer samples.
So what we do is pass samples through
the model: various feature permutations of the particular datapoint that
we're trying to explain. Of course, most ML models won't just let you omit a feature,
so what we do is define a background dataset B, one that contains a set of representative
datapoints that the model was trained over. We then fill in our omitted feature or features
with values from the background dataset, while holding the features that are included in
the permutation fixed to their original values. We then take the average of the model output
over all of these new synthetic datapoints as our model output for that feature permutation,
which we'll call y-bar.
So once we have a number of samples computed in
this way, we can formulate this as a weighted linear regression, with each feature assigned
a coefficient. With a very specific choice of weighting for each sample, based on a combination
of the total number of features in the model, the number of coalitions with the same
number of features as this particular sample, and the number of features included and excluded
in this permutation, we ensure that the solution to this weighted linear regression is such
that the returned coefficients are equivalent to the Shapley values. This weighting
scheme is the basis of the Shapley Kernel, and the weighted linear regression
process as a whole is Kernel SHAP.
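For reference, that weighting, the Shapley kernel, assigns each sampled coalition z' (out of M total features, with |z'| features included) the weight:

$$
\pi_{x}(z') = \frac{M - 1}{\binom{M}{|z'|}\; |z'|\; \left(M - |z'|\right)}
$$

The binomial coefficient counts the coalitions with the same number of included features as this sample, and the |z'|(M - |z'|) term accounts for how many features are included and excluded, which is exactly the combination described above.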
Now, there are a lot of other forms of SHAP presented in the paper, ones that make use of model-specific assumptions and optimizations to
speed up the algorithm and the sampling process, but Kernel SHAP is the one among them that is
universal and can be applied to any type of machine learning model. This general applicability
is why we chose Kernel SHAP as the first form of SHAP to implement for TrustyAI; we want to be
able to cover every possible use-case first, and add specific optimizations later.
At the time of recording this video, the 18th of March 2021, I'd estimate I'm about
85% done with our implementation, and it should be ready in a week or so. Since our
version isn't quite finished yet, I'll run through an example using the Python SHAP
implementation provided by Lundberg and Lee.
So first I’ll grab a dataset to run our example
over, and I’ve picked the Boston housing price dataset, which is a dataset consisting of
various attributes about Boston neighborhoods and the corresponding house prices within that
neighborhood. Next, I’ll train a model over that dataset, in this case an XGBoost regressor. Let’s
take a quick look at the performance of our model, just to make sure the model we’ll
be explaining is actually any good. Here I’m comparing the predicted
house value on the x axis to the actual house value on the y axis, and we can
see that our plot runs pretty close to y=x, indicating that our model is relatively decent; it
has a mean absolute error of 2.27; so about 5-10% error given the magnitude of the predictions,
more than good enough for our purposes. So now that we have a model, let’s imagine we’re
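If you'd like to follow along, the setup looks roughly like this; I'm sketching from memory, so the exact split and parameters may differ from what's on screen, and note that load_boston has since been removed from newer scikit-learn releases:

```python
import xgboost
from sklearn.datasets import load_boston              # removed in newer scikit-learn releases
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Boston housing: neighborhood attributes -> house value (in $1000s)
X, y = load_boston(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an XGBoost regressor over the dataset
model = xgboost.XGBRegressor()
model.fit(X_train, y_train)

# Quick sanity check of the model we'll be explaining
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```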
So now that we have a model, let's imagine we're trying to use it to predict the value of our own house. We'll take a look at the input
features, fill them out, and pass them through the model, and we see that our house
has a value of around 22 thousand dollars. But why? To answer that, let's set up a Kernel SHAP
explainer: we'll pass it our prediction function and some background data. Next, we'll pass it
our sample datapoint, the one we created earlier.
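In code, the explainer setup is roughly the following; x_house here stands for the hypothetical datapoint we just filled out, and the background size of 100 is my choice rather than a requirement:

```python
import shap

# Background dataset B: a representative sample of the training data
background = shap.sample(X_train, 100)

# Kernel SHAP only needs a prediction function and the background data
explainer = shap.KernelExplainer(model.predict, background)

# x_house: our hypothetical house, as a (1, n_features) array
shap_values = explainer.shap_values(x_house)
```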
Before we take a look at the SHAP values, let's make sure local accuracy is upheld, that our explanatory model is equivalent to the original
model. We'll do this by adding the model null to the sum of the SHAP values, and we find
that the result exactly matches the original prediction. Perfect: our original model and our explanatory
model agree.
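That check looks roughly like this, continuing from the snippet above:

```python
import numpy as np

# Local accuracy: the null output (expected value) plus the summed attributions
# should reproduce the model's actual prediction for our house
print(explainer.expected_value + np.sum(shap_values))   # explanatory model
print(model.predict(x_house)[0])                        # original model
```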
Now we can take a look at the SHAP values for each feature. Here I present
the value of each feature in our sample datapoint, its attribution, as well as
the average value that the feature takes in the background data, just so we know which
direction of change caused the attribution. We can see that the biggest attributions
are from the below-average crime rate, the above-average number of rooms, and the above-average
percentage of neighbors with low income.
The question is, however, are these attributions true? Did,
for example, this last feature cause us to lose exactly $1,190 from the value of our house? We can
test this by passing our datapoint back through the model, replacing the last feature with values
from the background dataset. The average of these outputs is our new house value, having excluded
the Lower Status feature. Our original datapoint predicted 22.09, while the datapoint with the
excluded feature predicted 24.42 on average. That's a change of around 2.33, almost double the change
predicted by SHAP. So where did SHAP go wrong?
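Here's roughly how that check goes; I'm assuming the Lower Status (LSTAT) feature sits in the last column, as in the standard ordering of the Boston dataset:

```python
import numpy as np

# Swap LSTAT for background values while holding every other feature fixed
lstat = -1                                              # LSTAT is the last column
synthetic = np.repeat(x_house, len(background), axis=0)
synthetic[:, lstat] = np.asarray(background)[:, lstat]

with_lstat = model.predict(x_house)[0]                  # ~22.09 in the walkthrough
without_lstat = model.predict(synthetic).mean()         # ~24.42, averaged over B
print(with_lstat - without_lstat)                       # ~-2.33, vs. the -1.19 from SHAP
```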
Well, all of these attributions come from a weighted
linear regression, one trained over noisy samples. There is going to be implicit error in each of the
attributions, giving each one an error bound. The existing Python implementation by Lundberg and
Lee doesn't report these error bounds, so it's very possible that the delta of -1.19 is actually
-1.19 plus or minus 1.19. This is something the TrustyAI implementation will remedy, so that we
can ensure the reported attributions and bounds always match reality, and are therefore
trustworthy. But until then, that's all I've got time for. I'd love to hear any questions
you may have, and thanks so much for listening!