explainerdashboard's Issues

Internal Server Error

First time trying the dashboard.

I have a small Python file that looks as follows:

import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from explainerdashboard import RegressionExplainer, ExplainerDashboard

X_train = pd.read_pickle("X_train.pkl")
X_test = pd.read_pickle("X_test.pkl")
y_train = pd.read_pickle("y_train.pkl")
y_test = pd.read_pickle("y_test.pkl")

model = RandomForestRegressor(n_jobs=-1, random_state=42)

model = model.fit(X_train, y_train)

explainer = RegressionExplainer(model, X_test, y_test, shap="tree")
ExplainerDashboard(explainer).run()

Also, I'm on Windows and installed explainerdashboard using:

conda create -n test_env python=3.8
conda activate test_env
pip install explainerdashboard

When running the code I get

(test_env) C:\Users\131416\python\modelling>python explainer_dashboard.py
Generating self.shap_explainer = shap.TreeExplainer(model)
Changing class type to RandomForestRegressionExplainer...
Building ExplainerDashboard..
Generating ShadowDecTree for each individual decision tree...
Generating layout...
Calculating shap values...
Calculating predictions...
Calculating residuals...
Calculating absolute residuals...
Calculating dependencies...
Calculating importances...
Calculating shap interaction values...
Registering callbacks...
Starting ExplainerDashboard on http://localhost:8050
Dash is running on http://127.0.0.1:8050/

 * Serving Flask app "explainerdashboard.dashboards" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
 * Running on http://127.0.0.1:8050/ (Press CTRL+C to quit)
Exception on / [GET]
Traceback (most recent call last):
  File "C:\Users\131416\AppData\Local\Continuum\anaconda3\envs\test_env\lib\site-packages\flask\app.py", line 2447, in wsgi_app
    response = self.full_dispatch_request()
  File "C:\Users\131416\AppData\Local\Continuum\anaconda3\envs\test_env\lib\site-packages\flask\app.py", line 1945, in full_dispatch_request
    self.try_trigger_before_first_request_functions()
  File "C:\Users\131416\AppData\Local\Continuum\anaconda3\envs\test_env\lib\site-packages\flask\app.py", line 1993, in try_trigger_before_first_request_functions
    func()
  File "C:\Users\131416\AppData\Local\Continuum\anaconda3\envs\test_env\lib\site-packages\dash\dash.py", line 1093, in _setup_server
    _validate.validate_layout(self.layout, self._layout_value())
  File "C:\Users\131416\AppData\Local\Continuum\anaconda3\envs\test_env\lib\site-packages\dash\_validate.py", line 348, in validate_layout
    raise exceptions.DuplicateIdError(
dash.exceptions.DuplicateIdError: Duplicate component id found in the initial layout: `whatif-SL-input-qHjwzsAtGC`
127.0.0.1 - - [11/Nov/2020 07:48:30] "GET / HTTP/1.1" 500 -
Exception on /favicon.ico [GET]
Traceback (most recent call last):
  File "C:\Users\131416\AppData\Local\Continuum\anaconda3\envs\test_env\lib\site-packages\flask\app.py", line 2447, in wsgi_app
    response = self.full_dispatch_request()
  File "C:\Users\131416\AppData\Local\Continuum\anaconda3\envs\test_env\lib\site-packages\flask\app.py", line 1945, in full_dispatch_request
    self.try_trigger_before_first_request_functions()
  File "C:\Users\131416\AppData\Local\Continuum\anaconda3\envs\test_env\lib\site-packages\flask\app.py", line 1993, in try_trigger_before_first_request_functions
    func()
  File "C:\Users\131416\AppData\Local\Continuum\anaconda3\envs\test_env\lib\site-packages\dash\dash.py", line 1093, in _setup_server
    _validate.validate_layout(self.layout, self._layout_value())
  File "C:\Users\131416\AppData\Local\Continuum\anaconda3\envs\test_env\lib\site-packages\dash\_validate.py", line 348, in validate_layout
    raise exceptions.DuplicateIdError(
dash.exceptions.DuplicateIdError: Duplicate component id found in the initial layout: `whatif-SL-input-qHjwzsAtGC`
127.0.0.1 - - [11/Nov/2020 07:48:30] "GET /favicon.ico HTTP/1.1" 500 -

decision_trees plot fails for RandomForestRegressor with sklearn 0.24

With the just released sklearn 0.24, the .classes_ attribute has been deprecated for sklearn.tree.DecisionTreeRegressor,
resulting in the following error in plot_rf_trees(...):

>       if model.estimators_[0].classes_[0] is not None: #if classifier
E       AttributeError: 'DecisionTreeRegressor' object has no attribute 'classes_'

option to change colors of waterfall plot

I understand shap plots use red for positive (up) and blue for negative (down).

From a layperson's point of view, they are probably used to seeing green for positive (up), red for negative (down), and blue for the final value.

It would be nice to have an option in ExplainerDashboard to do this, e.g. waterfall_plot_colors='rg'.

[screenshot]

question: provide cats when data is already OHE?

I have a dataframe which has been one-hot-encoded.

The original data could be
NAME
Ray
Oege

Therefore, the data going in looks like
Ray Oege
1 0
0 1

I'm not sure how to pass this in to the explainer?
Somehow it needs to know that Ray and Oege were once associated with a column called NAME.

I like the 'group cats' button in the dashboard
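
For reference, a minimal sketch of how the grouping could be passed explicitly, assuming a version of explainerdashboard whose cats parameter accepts a dict mapping a group name to its onehot columns (later releases do; older versions instead rely on a shared column prefix such as NAME_Ray, NAME_Oege):

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from explainerdashboard import ClassifierExplainer, ExplainerDashboard

# toy frame in which 'Ray' and 'Oege' were once a single NAME column
X = pd.DataFrame({"Ray": [1, 0, 1, 0], "Oege": [0, 1, 0, 1], "Age": [35, 40, 22, 51]})
y = pd.Series([0, 1, 0, 1])

model = RandomForestClassifier(n_estimators=10).fit(X, y)

# tell the explainer which onehot columns belong together, so the
# 'group cats' toggle can fold them back into NAME
explainer = ClassifierExplainer(model, X, y, cats=[{"NAME": ["Ray", "Oege"]}])
ExplainerDashboard(explainer).run()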

Showing observed label in ClassifierPredictionSummaryComponent

Hi,

I am using feature_input_component to link ClassifierPredictionSummaryComponent to FeatureInputComponent. Changing the index in FeatureInputComponent correctly changes the table and pie chart in ClassifierPredictionSummaryComponent. However, the observed label * indicator is not working in this case.

Thanks,
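
For reference, a rough sketch of the wiring described above (component and parameter names are taken from this report; this only illustrates the setup, it is not a fix for the observed-label indicator):

from explainerdashboard.custom import *
import dash_bootstrap_components as dbc

class CustomWhatIfTab(ExplainerComponent):
    def __init__(self, explainer, name=None):
        super().__init__(explainer, title="What if", name=name)
        # the feature inputs drive the prediction summary via feature_input_component
        self.inputs = FeatureInputComponent(explainer)
        self.summary = ClassifierPredictionSummaryComponent(
            explainer, feature_input_component=self.inputs)

    def layout(self):
        return dbc.Row([dbc.Col([self.inputs.layout()]),
                        dbc.Col([self.summary.layout()])])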

Including feature dropped in One-Hot-Encoding to cats / FeatureInputComponent

I just realized that there is a parameter cats - grouping one-hot-encoded variables back together is something I've been missing in Python forever, great stuff! In particular, it simplifies the otherwise huge FeatureInputComponent.

However, I'm usually dropping one level when encoding, which is of course missing in the respective dropdown menu in FeatureInputComponent. Is it possible to include the dropped level so that it can be chosen in FeatureInputComponent as well?

RegressionExplainer.from_file Q

I believe there is a typo in the release notes

https://github.com/oegedijk/explainerdashboard/blob/93a798995d1910774d6423c47c02013c0e51d862/RELEASE_NOTES.md#new-features-7

explainer.from_file() should be BaseExplainer.from_file()

Also, not sure if this command is in the docs anywhere? Ah, I think I found it:

https://github.com/oegedijk/explainerdashboard/blob/22972df6c7bd0edc69838578235e11048df425a5/docs/source/cli.rst#explainerfrom_file

I know it would be breaking changes but any reason it was named "from_file" and not "load"?
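
For anyone landing here, a short sketch of the round-trip as discussed above (the filename is just an example, and explainer is assumed to be an already fitted explainer):

from explainerdashboard import RegressionExplainer

# save a fitted explainer to disk...
explainer.dump("explainer.joblib")

# ...and load it back later; from_file lives on the explainer classes
# (RegressionExplainer / ClassifierExplainer via BaseExplainer)
explainer = RegressionExplainer.from_file("explainer.joblib")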

to_yaml misses some kwargs e.g. no_permutations

I'm adding a few settings to the dashboard then saving as a yaml file

db = ExplainerDashboard(
    explainer,
    importances=False,
    model_summary=False,
    decision_trees=False,
    no_permutations=True,
    hide_depth=True,
    hide_pdp=True,
    title=title,
)
db.to_yaml("config/dashboard.yaml", explainerfile="config/explainer.joblib")

When loading the yaml, it is missing some of these keywords,

e.g. no_permutations, hide_depth, hide_pdp.

I'm aware these are **kwargs to ExplainerComponents, but it would be nice if to_yaml also captured these: https://explainerdashboard.readthedocs.io/en/latest/custom.html?highlight=no_permutations#passing-parameters-as-kwargs
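
For context, a sketch of the round-trip that currently loses these kwargs (paths match the snippet above):

from explainerdashboard import ExplainerDashboard

# reload the dashboard from the saved config; the **kwargs listed above
# (no_permutations, hide_depth, hide_pdp) are not stored in the yaml,
# so they would have to be re-specified somehow at this point
db = ExplainerDashboard.from_config("config/dashboard.yaml")
db.run()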

Feature request: Decision Trees tab: adjust location of labels for decision trees

Again, loving this package.

Using

explainer = RegressionExplainer(model, X_test, y_test, cats=cats, shap="tree")
ExplainerDashboard(explainer).run()

There are a couple of instances where the y label is on top of the avg pred label (see below) on the decision trees tab. It would be good if they could somehow be slightly separated. In addition, perhaps y could be renamed to a friendlier common term, 'observed'?

[screenshot]

Addition: A SimplifiedClassifierDashboard

The default dashboard can be very overwhelming with lots of tabs, toggles and dropdowns. It would be nice to offer a simplified version. This can be built as a custom ExplainerComponent and included in custom, so that you could e.g.:

from explainerdashboard import ClassifierExplainer, ExplainerDashboard
from explainerdashboard.custom import SimplifiedClassifierDashboard

explainer = ClassifierExplainer(model, X, y)
ExplainerDashboard(explainer, SimplifiedClassifierDashboard).run()

It should probably include at least:

  • Confusion matrix + one other model quality indicator
  • Shap importances
  • Shap dependence
  • Shap contributions graph

And ideally would add in some dash_bootstrap_components sugar to make it look extra nice, plus perhaps some extra information on how to interpret the various graphs.
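
A rough sketch of what such a composite component could look like, stitched together from existing components (an illustration of the proposal, not the eventual implementation; the component names are the ones currently exposed in explainerdashboard.custom):

from explainerdashboard.custom import *
import dash_bootstrap_components as dbc

class SimplifiedClassifierDashboard(ExplainerComponent):
    def __init__(self, explainer, name=None, **kwargs):
        super().__init__(explainer, title="Simplified dashboard", name=name)
        # confusion matrix + shap importances/dependence/contributions
        self.confusion = ConfusionMatrixComponent(explainer, **kwargs)
        self.importances = ShapSummaryComponent(explainer, **kwargs)
        self.dependence = ShapDependenceComponent(explainer, **kwargs)
        self.contributions = ShapContributionsGraphComponent(explainer, **kwargs)

    def layout(self):
        return dbc.Container([
            dbc.Row([dbc.Col([self.confusion.layout()]),
                     dbc.Col([self.importances.layout()])]),
            dbc.Row([dbc.Col([self.dependence.layout()]),
                     dbc.Col([self.contributions.layout()])]),
        ])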

Error due to check_additivity

Hi,

I am getting the following error message for the random forest model:
Exception: Additivity check failed in TreeExplainer! Please ensure the data matrix you passed to the explainer is the same shape that the model was trained on. If your data shape is correct then please report this on GitHub. Consider retrying with the feature_perturbation='interventional' option. This check failed because for one of the samples the sum of the SHAP values was 0.626204, while the model output was 0.710000. If this difference is acceptable you can set check_additivity=False to disable this check.

Do you have any suggestion to solve the issue?

Thanks,
Saman
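
Not an explainerdashboard switch, but for debugging you can reproduce the check outside the dashboard with the options the error message itself suggests (a sketch; model and X_test stand in for your own fitted model and data):

import shap

# option 1: interventional perturbation with explicit background data,
# as suggested in the error message
tree_explainer = shap.TreeExplainer(model, data=X_test,
                                    feature_perturbation="interventional")
shap_values = tree_explainer.shap_values(X_test)

# option 2: keep the default explainer but skip the additivity check
shap_values = shap.TreeExplainer(model).shap_values(X_test, check_additivity=False)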

Request: Plotting parameters in WhatifComponent

As far as I can see, WhatifComponent includes ShapContributionsGraphComponent, but does not expose plot styling parameters such as "sort" and "orientation". I think it makes sense to add these :-)

Problem with deploying a custom dashboard using waitress

I am currently trying to deploy explainer dashboards using waitress. While it works fine with the vanilla regression dashboard, there is an error when running waitress-serve --call "dashboard:create_c_app", where dashboard.py is:

from explainerdashboard import ExplainerDashboard
from custom_tabs import *

def create_c_app():
    db = ExplainerDashboard.from_config("dashboard.yaml")
    app = db.flask_server()
    return app

The error is

Traceback (most recent call last):
  File "C:\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\...\venv\Scripts\waitress-serve.exe\__main__.py", line 7, in <module>
  File "c:\...\venv\lib\site-packages\waitress\runner.py", line 280, in run
    app = app()
  File "C:\...\dashboard.py", line 10, in create_c_app
    db = ExplainerDashboard.from_config("dashboard.yaml")
  File "c:\...\venv\lib\site-packages\explainerdashboard\dashboards.py", line 483, in from_config
    tabs = cls._yamltabs_to_tabs(dashboard_params['tabs'], explainer)
  File "c:\...\venv\lib\site-packages\explainerdashboard\dashboards.py", line 611, in _yamltabs_to_tabs
    return [instantiate_tab(tab, explainer)  for tab in yamltabs]
  File "c:\...\venv\lib\site-packages\explainerdashboard\dashboards.py", line 611, in <listcomp>
    return [instantiate_tab(tab, explainer)  for tab in yamltabs]
  File "c:\...\venv\lib\site-packages\explainerdashboard\dashboards.py", line 600, in instantiate_tab
    tab_class = getattr(import_module(tab['module']), tab['name'])
AttributeError: module '__main__' has no attribute 'CustomDescriptiveTab

The .yaml file looks like this:

[...]
    decision_trees: true
    tabs:
    - name: CustomDescriptiveTab
      module: __main__
      params: null
    - name: CustomFeatImpTab
      module: __main__
      params: null
    - name: CustomWhatIfTab
      module: __main__
      params: null

I don't understand why running via waitress is not working, whereas loading the dashboard with ExplainerDashboard.from_config("dashboard.yaml") and running it locally works fine.
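
One guess at a workaround (not verified): under waitress-serve, __main__ is waitress's own runner rather than the script that defined the tabs, so the module: __main__ entries in the yaml can no longer be resolved. Defining the tabs in an importable module (here custom_tabs.py, which is already imported above) and regenerating the config from a script that imports them from there should make to_yaml record a stable module path, e.g.:

# build_config.py  (hypothetical helper, run once to regenerate the yaml)
from explainerdashboard import ExplainerDashboard, RegressionExplainer
from custom_tabs import CustomDescriptiveTab, CustomFeatImpTab, CustomWhatIfTab

explainer = RegressionExplainer.from_file("explainer.joblib")
db = ExplainerDashboard(explainer,
                        [CustomDescriptiveTab, CustomFeatImpTab, CustomWhatIfTab])
# the tabs section should now read module: custom_tabs instead of module: __main__
db.to_yaml("dashboard.yaml", explainerfile="explainer.joblib")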

Addition: SimplifiedRegressionDashboard

The default dashboard can be very overwhelming with lots of tabs, toggles and dropdowns. It would be nice to offer a simplified version. This can be built as a custom ExplainerComponent and included in custom, so that you could e.g.:

from explainerdashboard import RegressionExplainer, ExplainerDashboard
from explainerdashboard.custom import SimplifiedRegressionDashboard

explainer = RegressionExplainer(model, X, y)
ExplainerDashboard(explainer, SimplifiedRegressionDashboard).run()

It should probably include at least:

  • Predicted vs actual plot
  • Shap importances
  • Shap dependence
  • Shap contributions graph

And ideally would add in some dash_bootstrap_components sugar to make it look extra nice, plus perhaps some extra information on how to interpret the various graphs.

hide pdp on What if... tab

Thanks for working on this (#41).

Just tested, and the pdp plot remained in the What if... tab. Is there a way to remove it, similar to removing it from the Individual Predictions tab?

from explainerdashboard.datasets import (
    titanic_fare,
    titanic_names,
    feature_descriptions,
)
from sklearn.ensemble import RandomForestRegressor
from explainerdashboard import RegressionExplainer, ExplainerDashboard

X_train, y_train, X_test, y_test = titanic_fare()

model = RandomForestRegressor(n_estimators=50, max_depth=5)
model.fit(X_train, y_train)

train_names, test_names = titanic_names()

explainer = RegressionExplainer(
    model,
    X_test,
    y_test,
    cats=["Sex", "Deck", "Embarked"],
    idxs=test_names,
    target="Fare",
    descriptions=feature_descriptions,
    units="$",
)

db = ExplainerDashboard(
    explainer,
    importances=False,
    model_summary=False,
    decision_trees=False,
    no_permutations=True,
    hide_depth=True,
    hide_pdp=True,
)
db.run()

ModuleNotFoundError: No module named 'numba.serialize'

What is the correct version of the numba package needed to run ClassifierExplainer.from_file? When I try to run the code below, I get the following message: ModuleNotFoundError: No module named 'numba.serialize'. My current version of numba is 0.52.0.

Attempted code:

from flask import Flask
from explainerdashboard import ClassifierExplainer, ExplainerDashboard

app = Flask(__name__)

explainer = ClassifierExplainer.from_file("explainer.joblib")

db = ExplainerDashboard(explainer, server=app, url_base_pathname="/dashboard/")

@app.route('/dashboard')
def return_dashboard():
    return db.app.index()

app.run()

Layout questions

I built (well, it's highly unfinished) a dashboard motivated by a use case in predictive maintenance (https://pm-dashboard-2020.herokuapp.com/). However, I wasn't able to align the (grey background of the) header of my cards (containing the description) with the header of the cards of the built-in plots. Did you set any global configuration there?
Something similar happened when I included self-built plotly plots - creating them in the main file resulted in a different layout compared with creating them in a separate notebook...

Plus, I have two wishes regarding the feature input component:

  1. Fix the spacing between the variable name and the field or dropdown menu.
  2. Allow more than two columns (and one as well, I guess), so that the component isn't unnecessarily large if I, for instance, remove the card containing the map.

Failed to guess the type of shap explainer to use

First time using this code. Thanks.

When I try

explainer = RegressionExplainer(model, X_test, y_test)

I get

ValueError: Failed to guess the type of shap explainer to use. Please explicitly pass either shap='tree', 'linear', deep' or 'kernel'.

I'm not sure where to put this.

RandomIndex and FeatureInput not usable at the same time (anymore)

I updated to 0.2.13 and I'm really liking the cards, thanks for that! Actually, I wrapped my components into dbc.Cards before...

Reworking my WhatIfTab I found that RegressionRandomIndexComponent and FeatureInputComponent are not getting along. Is this intended? More precisely, if my class looks like

self.index = RegressionRandomIndexComponent(explainer,
                                                    hide_title=False, hide_index=False,
                                                    hide_slider=True, hide_labels=True,
                                                    hide_pred_or_perc=True,
                                                    hide_selector=True, hide_button=False)

self.input = FeatureInputComponent(explainer)
        
self.contributions = ShapContributionsGraphComponent(explainer, depth=7,
                                                             hide_title=False, hide_index=False,
                                                             hide_depth=True, hide_sort=True,
                                                             hide_orientation=True, hide_cats=True,
                                                             hide_selector=True,
                                                             feature_input_component=self.input
                                                             )
        
self.connector = IndexConnector(self.index, [self.contributions])

the SHAP plot only refreshes after a change in the FeatureInputComponent. If I remove feature_input_component=self.input, it (obviously) only refreshes after hitting "Random Index".

Edit: This is probably connected with forcing hide_index=True when using FeatureInputComponent as well. Is this strictly necessary, or only convenient for implementation reasons?

Plot vs. feature empty

(Tab "model_summary" in vanilla dashboard)
I'm using a non-special self-made dataset so I'm not sure why it's broken. Here is the error:

Exception on /_dash-update-component [POST]
Traceback (most recent call last):
  File "c:\users\...\venv\lib\site-packages\flask\app.py", line 2447, in wsgi_app
    response = self.full_dispatch_request()
  File "c:\users\...\venv\lib\site-packages\flask\app.py", line 1952, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "c:\users\...\venv\lib\site-packages\flask\app.py", line 1821, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "c:\users\...\venv\lib\site-packages\flask\_compat.py", line 39, in reraise
    raise value
  File "c:\users\...\venv\lib\site-packages\flask\app.py", line 1950, in full_dispatch_request
    rv = self.dispatch_request()
  File "c:\users\...\venv\lib\site-packages\flask\app.py", line 1936, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "c:\users\...\venv\lib\site-packages\dash\dash.py", line 1076, in dispatch
    response.set_data(func(*args, outputs_list=outputs_list))
  File "c:\...\venv\lib\site-packages\dash\dash.py", line 1007, in add_context
    output_value = func(*args, **kwargs)  # %% callback invoked %%
  File "c:\...\venv\lib\site-packages\explainerdashboard\dashboard_components\regression_components.py", line 370, in update_residuals_graph
    winsor=winsor, dropna=True)
  File "c:\users\...\venv\lib\site-packages\explainerdashboard\explainers.py", line 2551, in plot_residuals_vs_feature
    self.y[na_mask], self.preds[na_mask], col_vals[na_mask],
  File "c:\users\...\venv\lib\site-packages\pandas\core\series.py", line 902, in __getitem__
    key = check_bool_indexer(self.index, key)
  File "c:\users\...\venv\lib\site-packages\pandas\core\indexing.py", line 2183, in check_bool_indexer
    "Unalignable boolean Series provided as "
pandas.core.indexing.IndexingError: Unalignable boolean Series provided as indexer (index of the boolean Series and of the indexed object do not match).

Remove units for R2

I believe R-squared is unitless.

When doing

explainer = RegressionExplainer(..., units="$")

I see the snapshot below on the Model Performance tab.

[screenshot]

Updating to 0.2.19 broke custom dashboard

After updating to the most recent version, my custom dashboard isn't working anymore, neither with in-place construction nor when loading from disk. More precisely:

test_explainer = RegressionExplainer(dec_tree, X_small, y_small, cats=['Day_of_week', 'Hour', 'Vehicle', 'Position'])
ExplainerDashboard(test_explainer).run()

works fine but

explainer = RegressionExplainer(dec_tree, X_small, y_small, cats=['Day_of_week', 'Hour', 'Vehicle', 'Position'])
db = ExplainerDashboard(explainer, [VecPos, CustomFeatImpTabv2, CustomWhatIfTabv2])

yields

Building ExplainerDashboard..
Detected notebook environment, consider setting mode='external', mode='inline' or mode='jupyterlab' to keep the notebook interactive while the dashboard is running...
Generating layout...
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-8-d2ec058da33d> in <module>
      1 db = ExplainerDashboard(explainer,
----> 2                         [VecPos, CustomFeatImpTabv2, CustomWhatIfTabv2],
      3                         # external_stylesheets=[FLATLY],
      4                         # title='Predictive Maintenance'
      5                        )

c:\users\hkoppen\...\site-packages\explainerdashboard\dashboards.py in __init__(self, explainer, tabs, title, name, description, hide_header, header_hide_title, header_hide_selector, hide_poweredby, block_selector_callbacks, pos_label, fluid, mode, width, height, bootstrap, external_stylesheets, server, url_base_pathname, responsive, logins, port, importances, model_summary, contributions, whatif, shap_dependence, shap_interaction, decision_trees, **kwargs)
    510                             block_selector_callbacks=self.block_selector_callbacks,
    511                             pos_label=self.pos_label,
--> 512                             fluid=fluid))
    513         else:
    514             tabs = self._convert_str_tabs(tabs)

c:\users\hkoppen\...\site-packages\explainerdashboard\dashboards.py in __init__(self, explainer, tabs, title, description, header_hide_title, header_hide_selector, hide_poweredby, block_selector_callbacks, pos_label, fluid, **kwargs)
    128 
    129         self.selector = PosLabelSelector(explainer, name="0", pos_label=pos_label)
--> 130         self.tabs  = [instantiate_component(tab, explainer, name=str(i+1), **kwargs) for i, tab in enumerate(tabs)]
    131         assert len(self.tabs) > 0, 'When passing a list to tabs, need to pass at least one valid tab!'
    132 

c:\users\hkoppen\...\site-packages\explainerdashboard\dashboards.py in <listcomp>(.0)
    128 
    129         self.selector = PosLabelSelector(explainer, name="0", pos_label=pos_label)
--> 130         self.tabs  = [instantiate_component(tab, explainer, name=str(i+1), **kwargs) for i, tab in enumerate(tabs)]
    131         assert len(self.tabs) > 0, 'When passing a list to tabs, need to pass at least one valid tab!'
    132 

c:\users\hkoppen\...\site-packages\explainerdashboard\dashboards.py in instantiate_component(component, explainer, name, **kwargs)
     64 
     65     if inspect.isclass(component) and issubclass(component, ExplainerComponent):
---> 66         component = component(explainer, name=name, **kwargs)
     67         return component
     68     elif isinstance(component, ExplainerComponent):

TypeError: __init__() got an unexpected keyword argument 'name'

Off-topic: I am going to deploy the dashboard using Docker in one or two weeks; hopefully I'll be able to give some feedback on the respective issue then.

how to hide partial dependence plots?

AFAIK the key hide_pdp should hide the pdp plot (from all components?)

When running

db = ExplainerDashboard(
    explainer,
    hide_pdp=True,
)

It doesn't seem to remove the pdp plots from any tabs (Individual Predictions, What if...)

ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all()

I keep getting this error for any and all kinds of models I run. I even tried the example Titanic dataset, and even that ended up throwing the same error.
Is there an update in progress that could be causing this? explainerdashboard was working fine until 2 days ago. I would appreciate a quick response. :)
This is a really cool implementation and is very useful in my current work environment, thank you very much for working on this.

denied access to dashboard from explainer hub

Code is similar to

from explainerdashboard import ExplainerDashboard, ExplainerHub
import os

db1 = ExplainerDashboard.from_config("config/dashboard_ds-team.yaml")
db2 = ExplainerDashboard.from_config("config/dashboard_business.yaml")

hub = ExplainerHub([db1, db2])
hub_file = "config/hub.yaml"
hub.to_yaml(hub_file)

Then

hub_file = "config/hub.yaml"
hub = ExplainerHub.from_config(hub_file)
hub.run()

When clicking on one dashboard I see the image below

In the logs I see

Registering callbacks...
 * Serving Flask app "explainerdashboard.dashboards" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
 * Running on http://127.0.0.1:8050/ (Press CTRL+C to quit)
127.0.0.1 - - [22/Dec/2020 15:31:24] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [22/Dec/2020 15:31:25] "GET /_dash-component-suites/dash_renderer/[email protected]_8_3m1608565601.14.0.min.js HTTP/1.1" 200 -
127.0.0.1 - - [22/Dec/2020 15:31:25] "GET /_dash-component-suites/dash_renderer/[email protected]_8_3m1608565601.7.2.min.js HTTP/1.1" 200 -
127.0.0.1 - - [22/Dec/2020 15:31:25] "GET /_dash-component-suites/dash_renderer/[email protected]_8_3m1608565601.8.7.min.js HTTP/1.1" 200 -
127.0.0.1 - - [22/Dec/2020 15:31:25] "GET /assets/bootstrap.min.css?m=1608565640.6276863 HTTP/1.1" 200 -
127.0.0.1 - - [22/Dec/2020 15:31:25] "GET /_dash-component-suites/dash_renderer/[email protected]_8_3m1608565601.14.0.min.js HTTP/1.1" 200 -
127.0.0.1 - - [22/Dec/2020 15:31:25] "GET /_dash-component-suites/dash_html_components/dash_html_components.v1_1_1m1608565602.min.js HTTP/1.1" 200 -
127.0.0.1 - - [22/Dec/2020 15:31:25] "GET /_dash-component-suites/dash_core_components/dash_core_components-shared.v1_14_1m1608565602.js HTTP/1.1" 200 -
127.0.0.1 - - [22/Dec/2020 15:31:25] "GET /_dash-component-suites/dash_table/bundle.v4_11_1m1608565601.js HTTP/1.1" 200 -
127.0.0.1 - - [22/Dec/2020 15:31:25] "GET /_dash-component-suites/dash_core_components/dash_core_components.v1_14_1m1608565602.min.js HTTP/1.1" 200 -
127.0.0.1 - - [22/Dec/2020 15:31:25] "GET /_dash-component-suites/dash_bootstrap_components/_components/dash_bootstrap_components.v0_11_1m1608565637.min.js HTTP/1.1" 200 -
127.0.0.1 - - [22/Dec/2020 15:31:25] "GET /_dash-component-suites/dash_renderer/dash_renderer.v1_8_3m1608565601.min.js HTTP/1.1" 200 -
127.0.0.1 - - [22/Dec/2020 15:31:25] "GET /_dash-dependencies HTTP/1.1" 200 -
127.0.0.1 - - [22/Dec/2020 15:31:25] "GET /_dash-layout HTTP/1.1" 200 -
127.0.0.1 - - [22/Dec/2020 15:31:25] "GET /assets/favicon.ico?m=1608565640.628687 HTTP/1.1" 200 -
127.0.0.1 - - [22/Dec/2020 15:32:25] "GET /dashboard1 HTTP/1.1" 308 -
127.0.0.1 - - [22/Dec/2020 15:32:25] "GET /dashboard1/ HTTP/1.1" 403 -

[screenshot]

Catboost classifier to work with ClassifierExplainer

Naively passing a CatBoost classifier to ClassifierExplainer leads to this error message:

Exception: Currently TreeExplainer can only handle models with categorical splits when feature_perturbation="tree_path_dependent" and no background data is passed. Please try again using shap.TreeExplainer(model, feature_perturbation="tree_path_dependent").

After digging into the shap source code (https://github.com/slundberg/shap/blob/master/shap/explainers/_tree.py): before going on to calculate the shap values, they seem to correct the conditional sampling for SHAP, and CatBoost categorical variables are not supported for now. So these lines in the explainer might need a bit of tweaking to yield something like:
self._shap_explainer = shap.TreeExplainer(self.model)
where the internals of the SHAP tree explainer will handle CatBoost out of the box.

A few fixes come to mind:

  1. Expose these inputs so users can set model_output or feature_perturbation.
  2. Have a special CatBoost classifier class.

I will hotfix this issue in my fork and continue on. Maybe I will discover some new caveats that warrant a new class (for tree plotting, perhaps).
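
For reference, the shap-level workaround the error message points at (a sketch; model and X_test are placeholders for your fitted CatBoost classifier and data):

import shap

# build the TreeExplainer without background data so that shap uses
# tree_path_dependent perturbation, which is the mode the error message
# says is required for models with categorical splits
shap_explainer = shap.TreeExplainer(model, feature_perturbation="tree_path_dependent")
shap_values = shap_explainer.shap_values(X_test)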

Question regarding deployment on Heroku

I just tried to deploy my app on Heroku by directly importing the GitHub project.
However, I did not manage to "add the buildpack" correctly - I'm still generating a slug larger than 500MB. I did:

  • add the folder bin from https://github.com/niteoweb/heroku-buildpack-shell.git to my project folder,
  • add the folder .heroku including the file run.sh containing "pip install -y xgboost".

What am I doing wrong, do I have to add the buildpack somewhere in Heroku itself?

logins with one username and one password

Checking https://explainerdashboard.readthedocs.io/en/latest/deployment.html?highlight=auth#setting-logins-and-password

I was testing with one login and one password.

logins=["U", "P"] doesn't work (see below) but logins=[["U", "P"]] does.

I don't suppose there is a login kwarg, or that it can handle a flat list of length 2? It seems this comes from dash_auth, so I could upstream it there.

  File "src/dashboard_cel.py", line 24, in <module>
    logins=["Celebrity", "Beyond"],
  File "C:\Users\131416\AppData\Local\Continuum\anaconda3\envs\e\lib\site-packages\explainerdashboard\dashboards.py", line 369, in __init__
    self.auth = dash_auth.BasicAuth(self.app, logins)
  File "C:\Users\131416\AppData\Local\Continuum\anaconda3\envs\e\lib\site-packages\dash_auth\basic_auth.py", line 11, in __init__
    else {k: v for k, v in username_password_list}
  File "C:\Users\131416\AppData\Local\Continuum\anaconda3\envs\e\lib\site-packages\dash_auth\basic_auth.py", line 11, in <dictcomp>
    else {k: v for k, v in username_password_list}
ValueError: too many values to unpack (expected 2)
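
For reference, the form that works today (dash_auth expects a list of [username, password] pairs, so a single login still needs the nested list; explainer is assumed to be already built):

# one username/password pair still has to be wrapped in an outer list
db = ExplainerDashboard(explainer, logins=[["U", "P"]])
db.run()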

Bug: RegressionRandomIndexComponent not robust

I have just managed to kill the whole dashboard because of an error with RegressionRandomIndexComponent, i.e. everything works fine without this component, but enabling it results in the dashboard only displaying "Error loading layout.". The traceback is below.

I fixed it by appending ".astype('float')" to the data which goes into RegressionExplainer.

Exception on /_dash-layout [GET]
Traceback (most recent call last):
  File "c:\...\lib\site-packages\flask\app.py", line 2447, in wsgi_app
    response = self.full_dispatch_request()
  File "c:\...\lib\site-packages\flask\app.py", line 1952, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "c:\...\lib\site-packages\flask\app.py", line 1821, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "c:\...\lib\site-packages\flask\_compat.py", line 39, in reraise
    raise value
  File "c:\...\lib\site-packages\flask\app.py", line 1950, in full_dispatch_request
    rv = self.dispatch_request()
  File "c:\...\lib\site-packages\flask\app.py", line 1936, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "c:\...\lib\site-packages\dash\dash.py", line 531, in serve_layout
    json.dumps(layout, cls=plotly.utils.PlotlyJSONEncoder),
  File "C:\lib\json\__init__.py", line 238, in dumps
    **kw).encode(obj)
  File "c:\...\lib\site-packages\_plotly_utils\utils.py", line 45, in encode
    encoded_o = super(PlotlyJSONEncoder, self).encode(o)
  File "C:\lib\json\encoder.py", line 199, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "C:\lib\json\encoder.py", line 257, in iterencode
    return _iterencode(o, 0)
TypeError: keys must be str, int, float, bool or None, not numpy.int64

(this shows multiple times)
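
The workaround mentioned above, spelled out as a sketch (the cast apparently keeps numpy.int64 values out of the layout dict keys that the JSON encoder rejects, per the TypeError above):

# cast the data to float before building the explainer
explainer = RegressionExplainer(model,
                                X_test.astype("float"),
                                y_test.astype("float"))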

Feature Request: Option to rename "index"

I imagine the term "index" is confusing to a layperson.

Could the term "index" be changed by the user? E.g. for the Titanic example, "index" could be changed to "person".

[screenshot]

Temporary failure in name resolution - linux

Tested on my linux machine

Installation:

$ conda create -n test_env python=3.8
$ conda activate test_env
$ pip install explainerdashboard

and tested this https://explainerdashboard.readthedocs.io/en/latest/index.html#a-more-extended-example

This may be outside the scope of explainerdashboard.

>>> from sklearn.ensemble import RandomForestClassifier
>>> 
>>> from explainerdashboard import ClassifierExplainer, ExplainerDashboard
>>> from explainerdashboard.datasets import titanic_survive
>>> 
>>> X_train, y_train, X_test, y_test = titanic_survive()
>>> 
>>> model = RandomForestClassifier(n_estimators=50, max_depth=5)
>>> model.fit(X_train, y_train)
RandomForestClassifier(max_depth=5, n_estimators=50)
>>> 
>>> explainer = ClassifierExplainer(
...                 model, X_test, y_test,
...                 # optional:
...                 cats=['Sex', 'Deck', 'Embarked'],
...                 labels=['Not survived', 'Survived'])
Note: shap=='guess' so guessing for RandomForestClassifier shap='tree'...
Note: model_output=='probability', so assuming that raw shap output of RandomForestClassifier is in probability space...
Generating self.shap_explainer = shap.TreeExplainer(model)
Detected RandomForestClassifier model: Changing class type to RandomForestClassifierExplainer...
>>> 
>>> db = ExplainerDashboard(explainer, title="Titanic Explainer",
...                     whatif=False, # you can switch off tabs with bools
...                     shap_interaction=False,
...                     decision_trees=False)
Building ExplainerDashboard..
Generating layout...
Calculating shap values...
Calculating dependencies...
Calculating permutation importances (if slow, try setting n_jobs parameter)...
Calculating categorical permutation importances (if slow, try setting n_jobs parameter)...
Calculating prediction probabilities...
Calculating predictions...
Calculating pred_percentiles...
Registering callbacks...
>>> db.run(port=8051)
Starting ExplainerDashboard on http://localhost:8051
Dash is running on http://x86_64-conda_cos6-linux-gnu:8051/

 * Serving Flask app "explainerdashboard.dashboards" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/ray/local/bin/anaconda3/envs/test_env/lib/python3.8/site-packages/explainerdashboard/dashboards.py", line 674, in run
    self.app.run_server(port=port, **kwargs)
  File "/home/ray/local/bin/anaconda3/envs/test_env/lib/python3.8/site-packages/dash/dash.py", line 1716, in run_server
    self.server.run(host=host, port=port, debug=debug, **flask_run_options)
  File "/home/ray/local/bin/anaconda3/envs/test_env/lib/python3.8/site-packages/flask/app.py", line 990, in run
    run_simple(host, port, self, **options)
  File "/home/ray/local/bin/anaconda3/envs/test_env/lib/python3.8/site-packages/werkzeug/serving.py", line 1052, in run_simple
    inner()
  File "/home/ray/local/bin/anaconda3/envs/test_env/lib/python3.8/site-packages/werkzeug/serving.py", line 996, in inner
    srv = make_server(
  File "/home/ray/local/bin/anaconda3/envs/test_env/lib/python3.8/site-packages/werkzeug/serving.py", line 847, in make_server
    return ThreadedWSGIServer(
  File "/home/ray/local/bin/anaconda3/envs/test_env/lib/python3.8/site-packages/werkzeug/serving.py", line 740, in __init__
    HTTPServer.__init__(self, server_address, handler)
  File "/home/ray/local/bin/anaconda3/envs/test_env/lib/python3.8/socketserver.py", line 452, in __init__
    self.server_bind()
  File "/home/ray/local/bin/anaconda3/envs/test_env/lib/python3.8/http/server.py", line 138, in server_bind
    socketserver.TCPServer.server_bind(self)
  File "/home/ray/local/bin/anaconda3/envs/test_env/lib/python3.8/socketserver.py", line 466, in server_bind
    self.socket.bind(self.server_address)
socket.gaierror: [Errno -3] Temporary failure in name resolution
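
Since ExplainerDashboard.run() forwards extra keyword arguments straight to dash's app.run_server (visible in the traceback above), one thing to try (an assumption, not a confirmed fix) is binding to an explicit address instead of the conda hostname that fails to resolve:

# kwargs are passed through run() to app.run_server, so host can be set explicitly
db.run(port=8051, host="127.0.0.1")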

SHAPError with Explainer

Hi!

When trying to replicate the example dashboard, I get the error shown below.

Here is the code:

from sklearn.ensemble import RandomForestClassifier

from explainerdashboard.explainers import ClassifierExplainer
from explainerdashboard.dashboards import ExplainerDashboard

from explainerdashboard.datasets import titanic_survive, titanic_names

X_train, y_train, X_test, y_test = titanic_survive()
train_names, test_names = titanic_names()

model = RandomForestClassifier(n_estimators=50, max_depth=5)
model.fit(X_train, y_train)

explainer = ClassifierExplainer(
                model, X_test, y_test,
                # optional:
                cats=['Sex', 'Deck', 'Embarked'],
                labels=['Not survived', 'Survived'])

ExplainerDashboard(explainer).run()

SHAPError: Additivity check failed in TreeExplainer! Please ensure the data matrix you passed to the explainer is the same shape that the model was trained on. If your data shape is correct then please report this on GitHub. Consider retrying with the feature_perturbation='interventional' option. This check failed because for one of the samples the sum of the SHAP values was 0.898516, while the model output was -3111090361071495795172390141309047352589574977119107048305877198218705179454949444899246221946253350348130845905799474606115208759423898693485753827793299777443235521296469068241830714822432908526887716104261809192590482006041690112.000000. If this difference is acceptable you can set check_additivity=False to disable this check.

ImportError: cannot import name 'XGBExplainer' from 'explainerdashboard'

ImportError: cannot import name 'XGBExplainer' from 'explainerdashboard' (C:\Users\131416\AppData\Local\Continuum\anaconda3\envs\test_env\lib\site-packages\explainerdashboard\__init__.py)

I'll investigate more when I have time. I'm also on Windows.

Edit:

This works: from explainerdashboard.explainers import XGBExplainer. But it seems RegressionExplainer can be imported from the top level: from explainerdashboard import RegressionExplainer, and I would expect the other explainers to be importable from the top level as well.

Coming from

https://github.com/oegedijk/explainerdashboard/blob/master/explainerdashboard/__init__.py#L1

Probably just a design thing (no right or wrong way) so closing.

Problem with ClassifierRandomIndexComponent

I am just recreating the custom dashboard for Titanic for a regression model (the same thing I wrote about in the last post).
However, using ClassifierRandomIndexComponent in class CustomPredictionsTab throws an error:

---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
in
----> 1 ExplainerDashboard(explainer, [CustomModelTab, CustomPredictionsTab], title='Stuff').run(port=8051)

c:\users...\venv\lib\site-packages\explainerdashboard\dashboards.py in __init__(self, explainer, tabs, title, hide_header, header_hide_title, header_hide_selector, block_selector_callbacks, pos_label, fluid, mode, width, height, external_stylesheets, server, url_base_pathname, responsive, logins, port, importances, model_summary, contributions, whatif, shap_dependence, shap_interaction, decision_trees, **kwargs)
414 block_selector_callbacks=block_selector_callbacks,
415 pos_label=pos_label,
--> 416 fluid=fluid, **kwargs)
417 else:
418 tabs = self._convert_str_tabs(tabs)

c:\users...\venv\lib\site-packages\explainerdashboard\dashboards.py in __init__(self, explainer, tabs, title, hide_title, hide_selector, block_selector_callbacks, pos_label, fluid, **kwargs)
110
111 self.selector = PosLabelSelector(explainer, pos_label=pos_label)
--> 112 self.tabs = [instantiate_component(tab, explainer, **kwargs) for tab in tabs]
113 assert len(self.tabs) > 0, 'When passing a list to tabs, need to pass at least one valid tab!'
114

c:\users...\venv\lib\site-packages\explainerdashboard\dashboards.py in <listcomp>(.0)
110
111 self.selector = PosLabelSelector(explainer, pos_label=pos_label)
--> 112 self.tabs = [instantiate_component(tab, explainer, **kwargs) for tab in tabs]
113 assert len(self.tabs) > 0, 'When passing a list to tabs, need to pass at least one valid tab!'
114

c:\users...\venv\lib\site-packages\explainerdashboard\dashboards.py in instantiate_component(component, explainer, **kwargs)
54
55 if inspect.isclass(component) and issubclass(component, ExplainerComponent):
---> 56 return component(explainer, **kwargs)
57 elif isinstance(component, ExplainerComponent):
58 return component

in __init__(self, explainer)
91 hide_slider=True, hide_labels=True,
92 hide_pred_or_perc=True,
---> 93 hide_selector=True, hide_button=False)
94
95 self.contributions = ShapContributionsGraphComponent(explainer,

c:\users...\venv\lib\site-packages\explainerdashboard\dashboard_components\connectors.py in __init__(self, explainer, title, name, hide_title, hide_index, hide_slider, hide_labels, hide_pred_or_perc, hide_selector, hide_button, pos_label, index, slider, labels, pred_or_perc, **kwargs)
65
66 if self.labels is None:
---> 67 self.labels = self.explainer.labels
68
69 if self.explainer.y_missing:

AttributeError: 'RegressionExplainer' object has no attribute 'labels'

I'm slightly confused about that, since it should be the same component as on the vanilla "What if..." tab, which is working.
Anyway, is there a way to add the "What if..." tab without, e.g., the PDP plot to the custom dashboard?
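
A possible direction (a guess based on the traceback above): inside CustomPredictionsTab, use the regression flavour of the random index picker, which does not touch explainer.labels, and keep the rest of the tab unchanged:

# replaces the ClassifierRandomIndexComponent line in CustomPredictionsTab.__init__
self.index = RegressionRandomIndexComponent(explainer,
                                             hide_title=False, hide_index=False,
                                             hide_slider=True, hide_labels=True,
                                             hide_pred_or_perc=True,
                                             hide_selector=True, hide_button=False)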

Error with sklearn.ensemble.GradientBoostingClassifier

With a model using GradientBoostingClassifier, I get an error:

AssertionError: len(shap_explainer.expected_value)=1and len(labels)={len(self.labels)} do not match!

Code:

from explainerdashboard.explainers import *
from explainerdashboard.dashboards import *
from explainerdashboard.datasets import *
 
import plotly.io as pio
import os
 
# load classifier data
 
X_train, y_train, X_test, y_test = titanic_survive()
train_names, test_names = titanic_names()
 
# one-line example
 
#from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import GradientBoostingClassifier
 
 
#model = RandomForestClassifier(n_estimators=50, max_depth=5)
model = GradientBoostingClassifier(n_estimators=50, max_depth=5)
model.fit(X_train, y_train)
 
explainer = ClassifierExplainer(model, X_test, y_test)
 
explainer.plot_shap_contributions(index=0)

Understanding depth/topx in ShapContributionsGraphComponent

Depth (resp. topx) is supposed to show only the top contributing variables. However, in my example only 9 variables are relevant (i.e. abs(SHAP) > 0 in total), so if I choose depth >= 10 every variable is displayed. Is this intended? Shouldn't it show depth variables anyway, e.g. randomly choosing among the variables tying for last place?
