ML-TestingTools

The following repository contains a collection of tools for testing and validating machine learning models and datasets.

Model Analysis

Yellowbrick Link

Yellowbrick is a Python library that provides visualizations for model selection, evaluation, and validation. It includes tools for generating confusion matrices, learning curves, ROC curves, and other diagnostic plots. It is built around the scikit-learn API, so its visualizers wrap scikit-learn estimators (and any model exposing the same interface); a minimal code sketch follows the list of visualizers below.

Visualizers:

  • Feature Visualization
    • Rank Features
    • PCA Projections
    • ...
  • Regression Visualization
    • Prediction Error Plot
    • Residuals Plot
    • ...
  • Model Selection Visualization
    • Learning Curve
    • Feature Importance
    • ...
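
As a quick illustration of the visualizer API, here is a minimal sketch that draws a confusion matrix; the dataset and estimator are placeholder choices, not part of this repository:

```python
# Minimal Yellowbrick sketch: confusion matrix for a classifier.
# The dataset and estimator are placeholder choices for illustration.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from yellowbrick.classifier import ConfusionMatrix

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Visualizers wrap a scikit-learn estimator and follow fit/score/show.
viz = ConfusionMatrix(LogisticRegression(max_iter=1000))
viz.fit(X_train, y_train)   # fits the wrapped estimator
viz.score(X_test, y_test)   # evaluates and populates the matrix
viz.show()                  # renders the plot
```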

Alibi Link

Alibi is a Python library for machine learning model inspection and explanation. It provides black-box and white-box explanation methods, such as anchor explanations, counterfactual instances, and integrated gradients, for explaining individual predictions and probing model sensitivity.
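
A minimal sketch of Alibi's anchor explainer, assuming a tabular scikit-learn classifier as the model under test (the dataset and model are illustrative placeholders):

```python
# Minimal Alibi sketch: anchor explanation for a tabular classifier.
# The dataset and model are illustrative placeholders.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from alibi.explainers import AnchorTabular

data = load_iris()
clf = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Anchors only need black-box access to a prediction function.
explainer = AnchorTabular(clf.predict, feature_names=list(data.feature_names))
explainer.fit(data.data)  # learns feature percentiles for discretization

explanation = explainer.explain(data.data[0], threshold=0.95)
print(explanation.anchor)     # human-readable rule that "anchors" the prediction
print(explanation.precision)  # fraction of perturbations keeping the same prediction
```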

SHAP Link

SHAP is a Python library for model interpretability based on Shapley values. It computes feature attributions for individual predictions and provides visualizations such as summary, dependence, and force plots for inspecting feature effects both locally and globally.
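
A minimal sketch using SHAP's TreeExplainer on a tree ensemble; the regression dataset and model are placeholder choices:

```python
# Minimal SHAP sketch: Shapley-value attributions for a tree ensemble.
# The regression dataset and model are placeholder choices.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

data = load_diabetes()
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# TreeExplainer computes Shapley values efficiently for tree models.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data)

# Global view: which features drive predictions across the dataset.
shap.summary_plot(shap_values, data.data, feature_names=data.feature_names)
```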

($) ModelOp Center

ModelOp Center is a platform for managing and validating machine learning models in production. It includes functions for monitoring model performance, identifying model drift, and validating models against data distribution shifts.

IBM AI Fairness 360

AI Fairness 360 is an open-source toolkit for detecting and mitigating bias in machine learning models. It includes functions for measuring bias, generating fairness metrics, and applying fairness interventions.
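
A minimal sketch of AIF360's dataset metrics; the tiny DataFrame, column names, and group definitions are illustrative assumptions:

```python
# Minimal AIF360 sketch: group-fairness metrics on a toy dataset.
# The DataFrame, column names, and group definitions are illustrative.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

df = pd.DataFrame({
    "sex":   [0, 0, 0, 1, 1, 1],  # protected attribute (0 = unprivileged)
    "score": [1, 0, 0, 1, 1, 1],  # binary label (1 = favorable outcome)
})
dataset = BinaryLabelDataset(
    df=df, label_names=["score"], protected_attribute_names=["sex"]
)

metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=[{"sex": 0}],
    privileged_groups=[{"sex": 1}],
)
print(metric.disparate_impact())               # ratio of favorable-outcome rates
print(metric.statistical_parity_difference())  # difference in favorable-outcome rates
```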

IBM Watson OpenScale

Watson OpenScale is a platform for managing and monitoring machine learning models in production. It includes functions for detecting model drift, validating models against data distribution shifts, and monitoring model explainability.

IBM Adversarial Robustness Toolbox (ART)

ART is an open-source toolkit for testing the robustness of machine learning models against adversarial attacks. It includes functions for generating adversarial examples, measuring model robustness, and applying adversarial defenses.
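
A minimal sketch of an evasion attack with ART; the dataset, model, and perturbation budget (eps) are placeholder choices:

```python
# Minimal ART sketch: FGSM evasion attack on a scikit-learn model.
# The dataset, model, and perturbation budget (eps) are placeholders.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Wrap the fitted model so ART attacks can query it.
classifier = SklearnClassifier(model=model)
attack = FastGradientMethod(estimator=classifier, eps=0.5)
X_adv = attack.generate(x=X.astype(np.float32))

# Robustness check: accuracy on clean vs. adversarial inputs.
print("clean accuracy:      ", model.score(X, y))
print("adversarial accuracy:", model.score(X_adv, y))
```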

Certifai

Certifai is a platform for testing and certifying the safety of machine learning models in production. It includes functions for detecting bias, assessing fairness, and quantifying model safety against specific safety criteria.

Microsoft Counterfactual Fairness

Counterfactual Fairness is an open-source toolkit for testing and certifying the fairness of machine learning models. It includes functions for generating counterfactual examples, assessing model fairness, and applying fairness interventions.


Data Handling

Great Expectations - Great Expectations is an open-source framework for data validation that can be integrated with various machine learning frameworks. It provides tools for defining data expectations, validating data against those expectations, and generating data documentation.
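
A minimal sketch using Great Expectations' classic pandas API (the API has changed substantially across versions, so details may differ; the column names and bounds are illustrative):

```python
# Minimal Great Expectations sketch using the classic pandas API.
# The API has changed across versions; column names and bounds are illustrative.
import pandas as pd
import great_expectations as ge

df = ge.from_pandas(pd.DataFrame({"age": [25, 31, None, 42]}))

# Declare expectations about the data...
df.expect_column_values_to_not_be_null("age")
df.expect_column_values_to_be_between("age", min_value=0, max_value=120)

# ...then validate the dataset against all declared expectations.
results = df.validate()
print(results.success)  # False here: one "age" value is null
```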

Databricks MLflow - MLflow is an open-source platform for managing the machine learning lifecycle. It includes tools for tracking experiments, packaging code and models, and deploying models. Model signatures attached to logged models record the expected input/output schema, enabling schema validation of inference inputs.
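
A minimal sketch of tracking a run and logging a model with an inferred signature; the model and logged names are illustrative:

```python
# Minimal MLflow sketch: tracking a run and logging a model with an
# inferred signature so inputs can be schema-checked at serving time.
# The model and logged names are illustrative.
import mlflow
import mlflow.sklearn
from mlflow.models.signature import infer_signature
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

with mlflow.start_run():
    model = LogisticRegression(max_iter=1000).fit(X, y)
    mlflow.log_param("max_iter", 1000)
    mlflow.log_metric("train_accuracy", model.score(X, y))

    # The signature records the expected input/output schema of the model.
    signature = infer_signature(X, model.predict(X))
    mlflow.sklearn.log_model(model, "model", signature=signature)
```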

Deequ - Deequ is an open-source library for data quality assessment and monitoring. It provides functions for profiling data, defining constraints, and validating data against those constraints. It is built on Apache Spark, so checks run as distributed jobs over large datasets.
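
A minimal sketch via the PyDeequ wrapper (a separate Python package; it requires a Spark session configured with the Deequ jar, and the data and checks here are illustrative):

```python
# Minimal Deequ sketch via the PyDeequ wrapper (a separate Python package).
# Requires a Spark session with the Deequ jar; data and checks are illustrative.
from pyspark.sql import SparkSession
import pydeequ
from pydeequ.checks import Check, CheckLevel
from pydeequ.verification import VerificationSuite, VerificationResult

spark = (SparkSession.builder
         .config("spark.jars.packages", pydeequ.deequ_maven_coord)
         .config("spark.jars.excludes", pydeequ.f2j_maven_coord)
         .getOrCreate())

df = spark.createDataFrame([(1, "a"), (2, "b"), (3, None)], ["id", "name"])

# Define constraints, then validate the data against them.
check = Check(spark, CheckLevel.Error, "basic data quality")
result = (VerificationSuite(spark)
          .onData(df)
          .addCheck(check.isComplete("name")  # no nulls in "name"
                         .isUnique("id"))     # "id" values are distinct
          .run())

VerificationResult.checkResultsAsDataFrame(spark, result).show()
```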

Datahub - Datahub is an open-source platform for managing metadata and data assets. It provides tools for cataloging and documenting data, as well as functions for data validation and profiling. It can be integrated with various data processing frameworks, such as Apache Spark and Presto.

Apache Griffin - Apache Griffin is an open-source tool for data quality assessment and monitoring. It provides functions for profiling data, defining rules, and validating data against those rules. It can be integrated with Apache Spark for distributed data processing.
