
anchorsOnR

This package implements the Anchors XAI algorithm as proposed by Marco Tulio Ribeiro et al. (2018). The original paper, "Anchors: High-Precision Model-Agnostic Explanations", can be found here. It provides a short characterization of anchors, which reads as follows:

An anchor explanation is a rule that sufficiently “anchors” the prediction locally – such that changes to the rest of the feature values of the instance do not matter. In other words, for instances on which the anchor holds, the prediction is (almost) always the same.

The anchor method is able to explain any black box classifier, with two or more classes. All we require is that the classifier implements a function that takes [a data instance] and outputs [an integer] prediction.

Thus, anchors are highly precise explanations in the form of human-readable IF-THEN rules that describe which feature values caused the model's outcome for one specific instance. They provide a clear coverage, i.e., they state exactly to which other instances they apply.

This R package interfaces the anchorJ Java implementation.

Getting Started

These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.

Prerequisites

The R package requires a Java SE Runtime Environment (JRE) with Java version 8 or higher.

If you want to fiddle around with the anchorsOnR source code, make sure to have the devtools R-package installed.

install.packages("devtools")

Installing anchorsOnR

Now, install the anchors package directly from GitHub as follows:

devtools::install_github("viadee/anchorsOnR")

The following dependencies are required to use this package (unmodified, distributed and maintained by their respective authors through the established channels such as CRAN):

  • checkmate (BSD 3 clause)
  • jsonlite (MIT)
  • BBmisc (BSD 3 clause)
  • uuid (MIT)
  • magrittr (MIT)

Using the Algorithm

The anchors API was designed in the style of the lime R package. The best way to illustrate the process of model-agnostic explanation in anchors is by example. Assume we aim to understand predictions made on the iris dataset.

Obtaining a Model

Towards that goal, we first train an mlr learner as the black-box model to be explained:

library(anchors)
library(mlr)

data(iris)

# our goal is to predict the species
task = makeClassifTask(data = iris, target = "Species", id = "iris")

# setting up a learner
lrn = makeLearner("classif.rpart")

# train the learner on the training set
model = train(learner = lrn, task = task)

The created decision tree can easily be visualized, so the algorithm's results can be compared and validated. Nonetheless, the approach is model-agnostic, which means any other model could be explained as well, including models that are not inherently visualizable.
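For instance, the fitted tree can be extracted from the mlr wrapper and plotted (a minimal sketch continuing the snippet above; it assumes the rpart.plot package is installed, though any rpart plotting tool works):

```r
library(mlr)
library(rpart.plot)

# extract the underlying rpart object from the trained mlr model
tree = getLearnerModel(model)

# plot the fitted decision tree
rpart.plot(tree)
```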

As mentioned before, explaining a decision tree is of little use in practice, as explainability is built into its structure. For the purpose of this example, we nonetheless treat the model as a black box.

Iris decision tree visualized

Calling anchorsOnR

Having created a model whose behavior is to be explained, we can obtain the explanations by first creating an explainer and using it on a specific instance (or multiple instances):

explainer = anchors(iris, model, target = "Species")

explanations = explain(iris[100,], explainer)

The explain function spins up and eventually closes a background JVM in which the anchor server is tasked with determining the anchors in your dataset.

The explanation can be printed and looks similar to the following output:

printExplanations(explainer, explanations)

# ====Explained Instance 100 ====
# Sepal.Length = 5.7
# Sepal.Width = 2.8
# Petal.Length = 4.1
# Petal.Width = 1.3
# WITH LABEL  = 'versicolor'
# ====Result====
# IF Petal.Length = 4.1 (ADDED PRECISION: 0.1736, ADDED COVERAGE: -0.085) AND
# Petal.Width = 1.3 (ADDED PRECISION: 0.8263, ADDED COVERAGE: -0.913)
# THEN PREDICT 'versicolor'
# WITH PRECISION 1 AND COVERAGE 0.002

It becomes obvious why this approach is called Anchors: its results are rules that describe the decision making of a machine learning model, anchored around a particular instance of interest while generalizing to as many other instances as possible.

We can check the result with the visualized decision tree and see that the anchor in fact explains the model locally.

Discretization

The previous example shows one of anchors' disadvantages: rules get very specific for numeric values, and coverage is thus low. Discretization helps by grouping multiple values into one class, which anchors then uses as a proxy feature. This way, we obtain anchors that generalize better.

We can simply define the cut points for each feature and pass it to anchors:

bins = list()
bins[[1]] = list(cuts = c(4.3, 5.4, 6.3, 7.9))
bins[[2]] = list(cuts = c(2.0, 2.9, 3.2, 4.4))
bins[[3]] = list(cuts = c(1, 2.6333, 4.9, 6.9))
bins[[4]] = list(cuts = c(0.1, 0.8666, 1.6, 2.5))
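Incidentally, the cut points above are just the empirical terciles of each feature, together with its minimum and maximum, so they can also be derived programmatically with base R's quantile():

```r
data(iris)

# min, terciles, and max of each numeric feature as cut points
bins = lapply(iris[, 1:4], function(x) {
  list(cuts = unname(quantile(x, probs = c(0, 1/3, 2/3, 1))))
})
```

Up to rounding, this yields the same cut points as the manual definition above.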

explainer = anchors(iris, model, target = "Species", bins = bins)

explanations = explain(iris[100,], explainer)

The output now looks different. Being less specific and having a higher coverage, this rule applies to more instances than before and is easier to interpret.

printExplanations(explainer, explanations)

# ====Result====
# IF Petal.Length IN [2.6333,4.9) (ADDED PRECISION: 0.1676, ADDED COVERAGE: -0.251) AND
# Petal.Width IN [0.8666,1.6) (ADDED PRECISION: 0.8323, ADDED COVERAGE: -0.635)
# THEN PREDICT 'versicolor'
# WITH PRECISION 1 AND COVERAGE 0.114

Extending Model Support

By default, anchors supports a variety of machine learning packages and model classes, such as:

  • lda
  • mlr
  • keras
  • h2o

However, your preferred model might not be included in this list. In order to explain an arbitrary machine learning model, anchors needs to be able to retrieve predictions from that model in a standardized way. Furthermore, it requires information as to whether it is a classification or regression model. To cater to the former, anchors calls the predict_model() generic, which the user is free to supply methods for. For the latter, the model must respond to the model_type() generic. See models.R for examples on how to write corresponding methods.
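As a rough sketch (not the package's actual code), methods for a hypothetical model class "my_model" might look as follows; the exact return format expected by anchors is documented in models.R:

```r
# tell anchors that the hypothetical model is a classifier
model_type.my_model = function(x, ...) {
  "classification"
}

# standardized prediction interface for the hypothetical model;
# here we assume the wrapper stores its fitted model in x$fit
predict_model.my_model = function(x, newdata, type, ...) {
  as.data.frame(predict(x$fit, newdata = newdata))
}
```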

Authors

License

BSD 3-Clause License

Contributors

  • fkoehne
  • magdalenalang1
  • thllwg
  • tobiasgoerke


anchorsOnR's Issues

Refactor Perturbation

The perturbation code contains many "historical" legacy burdens. Based on outdated considerations, it was designed to be as flexible as possible so that different perturbation functions can be used for tabular data, images, etc. Thus, the perturbation code is accordingly complex. Since we now assume that the perturbation functions essentially exist as they are (i.e. one perturbation function for tabular data), the code could be significantly streamlined.

Make explained prediction settable

The explained prediction is set in a_dataframe.R by
prediction = predict_model(explainer$model, instance, type = o_type)

The user needs to be able to choose which prediction he or she would like to have explained. Thus, make this a parameter.

Discretization options are insufficient

One can control the parameters bin_continuous, n_bins, quantile_bins and use_density to discretize variables.
To my understanding, all variables are then discretized using the same options.

This, however, does not suffice:

  1. in most cases, variables need to be discretized differently.
  2. there need to be more options than these few.

In other words: the user needs full control over the discretization settings.

We have a few options here (non exhaustive list):

  1. submit a second dataset which contains the discretized values
  2. submit a collection of discretization functions

Which other options can you think of? Which would you prefer?

CRAN release

  • Create required meta files
  • Pick a version number.
  • Run and document R CMD check.
  • Check that you’re aligned with CRAN policies.
  • Update README.md and NEWS.md.
  • Submit the package to CRAN.
  • Prepare for the next version by updating version numbers.
  • Publicise the new version.

Move discretization to own class

bin_continuous, n_bins, quantile_bins and use_density should be moved elsewhere and produce a discretized dataset that can then be passed to anchors.

Streamlined explanations / More print options

If multiple cases are explained, there should be a possibility to streamline explanation printouts. For instance, if several 'setosa' instances are explained, the resulting anchors, if similar or identical, could be unified.

Example:
====Result====
IF Petal.Length IN [1,2.63333333333333) (ADDED PRECISION: 1, ADDED COVERAGE: -0.509)
THEN PREDICT 'setosa'
WITH PRECISION 1 AND COVERAGE 0.491
====For Explained Instances 15, 18 ====
====15====
Sepal.Length = 5.8
Sepal.Width = 4
Petal.Length = 1.2
Petal.Width = 0.2
WITH LABEL = 'setosa'
====18====
Sepal.Length = 5.8
Sepal.Width = 4
Petal.Length = 1.2
Petal.Width = 0.2
WITH LABEL = 'setosa'

Another nice-to-have would be the possibility to just print the rules without the instance details, e.g. via a verbosity parameter?

Enable multithreading

The application would greatly benefit from utilizing multiple threads. For R, this is no easy task and comes with plenty of overhead. However, there are multiple options we should think about. There could, for example, be multiple threads listening to incoming calls, creating multiple local explanations in parallel.

Contrary to #16, this issue is not about reducing communication overhead but actually enabling threading.

Merge Adapters

Merge javaAnchorAdapter changes as preparation for jar download

Separation of concerns

A large share of the application's logic currently lies within a few classes such as a_dataframe.R. This should be split up, as it mixes very different concerns.

Create a new Unsupervised-Discretizer

Since we did not find a non-GPL, CRAN-hosted package that discretizes continuous data in an unsupervised manner (#44), an unsupervised discretizer should be implemented directly.
It could resemble the implementations of viadee in the 'javaAnchorAdapters' package.
https://github.com/viadee/javaAnchorAdapters/tree/master/DefaultConfigsExtension/src/main/java/de/viadee/xai/anchor/adapter/tabular/discretizer/impl

Useful discretizers could be equalfrequency (PercentileMedianDiscretizers) and manual cutpoints (ManualDiscretizer).
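A minimal equal-frequency discretizer can be sketched in plain R with quantile() (an illustration of the idea, not the Java implementation's exact behavior):

```r
# cut points placing (roughly) the same number of observations per bin
equal_frequency_cuts = function(x, n_bins = 4) {
  probs = seq(0, 1, length.out = n_bins + 1)
  unique(unname(quantile(x, probs = probs)))
}

# map each value to its bin label
discretize_equal_frequency = function(x, n_bins = 4) {
  cut(x, breaks = equal_frequency_cuts(x, n_bins), include.lowest = TRUE)
}
```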

It should discretize as shown in the tests:

https://github.com/viadee/javaAnchorAdapters/tree/master/DefaultConfigsExtension/src/test/java/de/viadee/xai/anchor/adapter/tabular/discretizer

Coverage identification and performance issues

Coverage identification does not work properly as of now.
Additionally, the perturbations needed to calculate coverage are created each time the coverage identification is called. This could instead happen once, at initialization time.

Create a new supervised discretizer

Since we did not find a non-GPL, CRAN-hosted package that discretizes continuous data in a supervised manner (#44), a supervised discretizer should be implemented directly.
It could resemble the implementations of viadee in the 'javaAnchorAdapters' package.
https://github.com/viadee/javaAnchorAdapters/tree/master/DefaultConfigsExtension/src/main/java/de/viadee/xai/anchor/adapter/tabular/discretizer/impl

Useful discretizers could be FUSINTER (FUSINTERDiscretizer.java) see:
FUSINTER_A_Method_for_Discretization_of_Continuous.pdf

or Ameva (AmevaDiscretizer.java) see:
AMEVA 2009-Gonzalez-Abril-ESWA.pdf

or another discretizer, as supervised discretizations are currently being implemented and evaluated for the Java implementation of Anchors.

These should discretize as shown in the tests:

https://github.com/viadee/javaAnchorAdapters/tree/master/DefaultConfigsExtension/src/test/java/de/viadee/xai/anchor/adapter/tabular/discretizer/impl

Explanations without train set

Why does a training set need to be passed to anchors()?
Anchors is able to run without a data set; only the default perturbation function should require one.

Generate multiple perturbations at once

The current perturbation generation function is in need of performance improvement.
Generating all requested perturbations at once would help to reduce the total runtime.

Decouple jar from project

The java jar lies currently within the project.
The file needs to be removed and the current version downloaded from the maven central repository.

Printed explanation label differs from result

E.g.:

====Explained Instance 100 ====
Sepal.Length = 5.7
Sepal.Width = 2.8
Petal.Length = 4.1
Petal.Width = 1.3
WITH LABEL Species = 'versicolor'
====Result====
IF Petal.Width IN INLC RANGE [0.867,1.6) (ADDED PRECISION: 0.910299003322259, ADDED COVERAGE: -0.391)
THEN PREDICT '1' ('setosa')
WITH PRECISION 0.910299003322259 AND COVERAGE 0.609

shorten JVM waiting time

In R/connections.R -> initAnchors() the JVM is started. Until now, the function simply sleeps for 5 seconds to let the JVM start. This is clearly not a good solution. We need to find a dynamic way to handle this.

.anchors.startJar(ip = ip, port = port, name = name, ice_root = tempdir(), stdout = stdout, bind_to_localhost = FALSE, log_dir = NA, log_level = NA, context_path = NA)

Sys.sleep(5L)
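One dynamic alternative (a sketch, assuming the anchors server accepts TCP connections on its port once it is up) would be to poll the port with a timeout instead of sleeping unconditionally:

```r
# poll a TCP port until the JVM accepts connections, or give up
wait_for_jvm = function(host = "localhost", port, timeout = 30) {
  deadline = Sys.time() + timeout
  while (Sys.time() < deadline) {
    con = suppressWarnings(try(
      socketConnection(host = host, port = port, open = "r+", timeout = 1),
      silent = TRUE
    ))
    if (!inherits(con, "try-error")) {
      close(con)             # server is reachable; clean up the probe
      return(invisible(TRUE))
    }
    Sys.sleep(0.2)           # short back-off before the next attempt
  }
  stop("JVM did not become reachable within ", timeout, " seconds")
}
```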

Increase parallelizable workload

It seems there is a high relative communication overhead.
We should try to communicate less with the Java backend.

Idea: Increase the initSampleCount, use KL_LUCB with high batchSize. Then, enable parallelization in anchorsOnR.
