Comments (5)
Thanks for your question. SHAP values are additive: they sum to the model output minus the baseline prediction. To interpret your SHAP values correctly, just drop the first dimension and you end up with the feature shape. If everything is as in your post, the SHAP values sum up exactly. You can verify this manually with
assert np.allclose(np.sum(shap_val, axis=(0, 1, 2, 3, 4)) + np.sum(explainer.base_values), model.predict(input_val))
How you can interpret the values depends on what your features actually mean but this is domain specific knowledge that we cannot provide. For inspiration you can have a look at our notebooks (e.g. https://shap.readthedocs.io/en/latest/example_notebooks/tabular_examples/tree_based_models/NHANES%20I%20Survival%20Model.html)
from shap.
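The additivity check above can be sketched on synthetic data. This is a minimal illustration only: the array shape (2, 3, 691, 1) is taken from later in the thread, and `base_value` is a made-up stand-in for the explainer's base value, since no real model or explainer is run here.

```python
import numpy as np

# Synthetic stand-in for DeepExplainer output, shape (2, 3, 691, 1):
# 2 Siamese branches, 3 observations, 691 features, 1 channel.
rng = np.random.default_rng(0)
shap_val = rng.normal(size=(2, 3, 691, 1))
base_value = 0.5  # hypothetical stand-in for the explainer's base value

# Additivity: summing one observation's SHAP values over every
# non-observation axis and adding the base value should reproduce
# the model's prediction for that observation.
predictions = shap_val.sum(axis=(0, 2, 3)) + base_value  # shape (3,)

# Manual check for the first observation (both branches of the pair):
first = shap_val[0, 0].sum() + shap_val[1, 0].sum() + base_value
assert np.isclose(first, predictions[0])
```

Note that the observation axis (here axis 1) is kept, so the result has one value per prediction rather than collapsing to a scalar.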
Thank you very much for your answer!
When I try to verify this manually, I get the error AttributeError: 'DeepExplainer' object has no attribute 'base_values'. Maybe what I need is the attribute explainer.expected_value?
Also, if my understanding (and my guess about explainer.expected_value) is correct, the prediction for the first pair in my dataset should correspond to:
np.sum(shap_val[0][0][0]) + np.sum(shap_val[0][1][0]) + explainer.expected_value
According to this, shouldn't the manual check take the following form?
assert np.allclose(np.sum(shap_val, axis=(0, 1, -2)) + explainer.base_values, model.predict(input_val))
That is, collapsing by axis=(0, 1, -2). I ask because this way I get an array of shape (3, 1), which is the same shape as my predictions and should correspond to the sum of the 691 + 691 SHAP values for each pair, while by collapsing on axis=(0, 1, 2, 3, 4) I get an empty list (but I am not experienced with this, so I may be wrong).
One last doubt (I swear). Since I am using a Siamese network, my input consists of two instances with the same features, so for each pair I get a total of 691 + 691 = 1382 SHAP values per prediction. However, I would like to get a single SHAP value for each pair of corresponding features. I know from these issues 1, 2 that SHAP values can be grouped across features by summing them, but would that also make sense in this specific case? Naively, I would take the average of their absolute values, but I am not sure.
Thank you again for your quick reply; getting feedback actually makes me feel much safer about what I am doing.
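The two grouping options mentioned above (summing versus averaging absolute values across the Siamese branches) could look like the following sketch. The shape and the random values are hypothetical stand-ins; neither option is the library's own API, just plain NumPy reductions.

```python
import numpy as np

# Hypothetical SHAP values: (branch, observation, feature, channel)
rng = np.random.default_rng(1)
shap_val = rng.normal(size=(2, 3, 691, 1))

# Option 1: sum the two branches' attributions per feature.
# The per-feature values still add up to prediction - base value,
# so additivity is preserved.
per_feature_sum = shap_val.sum(axis=0)[..., 0]           # shape (3, 691)

# Option 2: mean absolute value across branches -- a magnitude-only
# importance score; it ignores sign and gives up additivity.
per_feature_mag = np.abs(shap_val).mean(axis=0)[..., 0]  # shape (3, 691)
```

Summing keeps the values interpretable as additive contributions, whereas the mean-absolute variant is only useful for ranking feature importance.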
You are correct, it is base_values. Basically, you want to sum over all feature (not observation) dimensions. What's the shape of a single observation?
So, a single observation is an array of length 691, but the model receives them in pairs, so each prediction is obtained from a pair. Right now I am using a very small input called pairs, which has the form:
np.shape(pairs)
(2, 3, 691, 1)
That is, a list containing 2 lists, each containing 3 instances (lists), each of them having 691 values (features). The pairs used in the model are the corresponding instances in the two lists (so, for example, the first pair corresponds to the elements pairs[0][0], pairs[1][0]).
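The pairing layout just described can be made concrete with a small sketch. The zero array is a placeholder for the real input; the list comprehension is a hypothetical illustration of how the branches line up, not code from the model.

```python
import numpy as np

# The input layout described above: axis 0 = Siamese branch,
# axis 1 = observation, axes 2-3 = the 691 features (one channel).
pairs = np.zeros((2, 3, 691, 1))

# The i-th pair fed to the model is (pairs[0][i], pairs[1][i]):
model_inputs = [(pairs[0][i], pairs[1][i]) for i in range(pairs.shape[1])]
```

So there are 3 pairs in total, each consisting of two (691, 1) arrays with corresponding features.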
Aren't those 3 different inputs? Then it would be
np.allclose(np.sum(shap_val, axis=(0, 2, 3)) + explainer.base_values, model.predict(input_val))
but you can probably figure that out yourself.