Reverse Dependencies of lime
The following projects have a declared dependency on lime; a sketch of what such a declaration looks like follows the list:
- aif360 — IBM AI Fairness 360
- aif360-fork2 — IBM AI Fairness 360
- alphaml — Build a CLETE Binary Classification Model
- antakia-core — Core modules for AntakIA
- argueview — ArgueView is a tool for generating text-based presentations for machine-learning predictions and feature-importance-based explanation tools. The tool uses Toulmin's model of argumentation to structure the text-based explanations.
- arthurai — Arthur Python SDK
- aucmedi — AUCMEDI - a framework for Automated Classification of Medical Images
- AuDoLab — With AuDoLab you can do LDA on highly imbalanced datasets.
- azureml-contrib-explain-model — no summary
- azureml-contrib-interpret — ML contrib interpret package contains experimental functionality to interpret ML models
- bem — Random forest for exoplanets
- calibrated-explanations — Extract calibrated explanations from machine learning models.
- carla-recourse — A library for counterfactual recourse
- cdt-test — A CDT test to assist doctors
- classification-text-email — compiled packages
- classmail — A simple framework for automating mail classification tasks
- contextual-ai — Contextual AI
- d3m-interface — Library to use D3M AutoML Systems
- datto — Data Tools (Dat To)
- dianna — Deep Insight And Neural Network Analysis
- Djaizz — Artificial Intelligence (AI) in Django Applications
- dynamask — Dynamask - Explaining Time Series Predictions with Dynamic Masks
- easul — Embeddable AI and State-based Understandable Logic toolkit
- email-txt-classification — compiled packages
- emxps — Miscellaneous Explanation Methods
- ESMValTool — ESMValTool: A community diagnostic and performance metrics tool for routine evaluation of Earth system models in CMIP.
- EtaML — An automated machine learning platform with a focus on explainability
- evalml — An AutoML library that builds, optimizes, and evaluates machine learning pipelines using domain-specific objective functions
- exlib — Toolkit for explainability by BrachioLab
- Explain-LISA-CNN-Research — Unified Explanation Provider For CNNs
- Explain-LISA-CNN-test-4 — Unified Explanation Provider For CNNs
- explainableai — A comprehensive package for Explainable AI and model interpretation
- expybox — Jupyter notebook toolbox for model interpretability/explainability
- fasttreeshap — A fast implementation of the TreeSHAP algorithm.
- ferret-xai — A Python package for benchmarking interpretability approaches.
- flass — Train a Keras convolutional neural network for image classification
- genomap — Genomap converts tabular gene expression data into spatially meaningful images.
- giotto-time — Toolbox for Time Series analysis and integration with Machine Learning.
- h1st — Human-First AI (H1st)
- h1st-contrib — Human-First AI (H1st)
- holistic — no summary
- holisticai — no summary
- insolver — Insolver is a low-code machine learning library, initially created for the insurance industry.
- interpret-community — Microsoft Interpret Extensions SDK for Python
- interpret-core — Fit interpretable machine learning models. Explain blackbox machine learning.
- InterpretME — An interpretable machine learning pipeline over knowledge graphs
- IREX — no summary
- keras-explain — Explanation toolbox for Keras models.
- LASExplanation — A wrapper toolkit built on two explanation packages, LIME and SHAP. It contains two explainers, LIMEBAG and SHAP, which take data and fitted models as input and return explanations of feature-importance ranks and/or weights (e.g., which attributes matter most within the prediction model).
- lime-stability — A package to evaluate Lime stability
- LISA-CNN-ExplainerV1 — Unified Explanation Provider For CNNs
- LISA-CNN-ExplainerV2 — Unified Explanation Provider For CNNs
- LISA-CNN-ExplainerV3 — Unified Explanation Provider For CNNs
- LISA-CNN-ExplainerV4 — Unified Explanation Provider For CNNs
- LISA-CNN-ExplainerV5 — Unified Explanation Provider For CNNs
- llama-lime — Explainable AI with Large Language Models
- lohrasb — This versatile tool streamlines hyperparameter optimization in machine learning workflows. It supports a wide range of search methods, from GridSearchCV and RandomizedSearchCV to advanced techniques like OptunaSearchCV, Ray Tune, and Scikit-Learn Tune. Designed to enhance model performance and efficiency, it's suitable for tasks of any scale.
- mdml — Application of deep learning to molecular dynamics trajectories
- MlTrackTool — Documentation and ML tracking tool using Jupyter extensions
- modelxplain — A package for enhancing model interpretability for machine learning models.
- neurojit — A Python package for calculating the commit understandability features of Java projects.
- oban-classifier — OBAN Classifier: A Skorch-based flexible neural network for binary and multiclass classification
- omnixai — OmniXAI: An Explainable AI Toolbox
- omnixai-community — OmniXAI: An Explainable AI Toolbox
- pear-xai — PEAR: Post-hoc Explainer Agreement Regularization
- PiML — A low-code interpretable machine learning toolbox in Python.
- pm4pybpmn — Process Mining for Python - BPMN support
- pou-shap — A unified approach to explain the output of any machine learning model.
- prolothar-rule-mining — algorithms for prediction and rule mining on event sequences
- punditkit — PunditKit: A GUI for Scikit-Learn Models
- pyreal — Library for evaluating and deploying human-readable machine learning explanations.
- pytolemaic — Package for ML model analysis
- recsystem — A package for Home Assistant integration
- RIM-interpret — Interpretability metrics for machine learning models
- RobustifyToolkit — Robustify:
- SAInTool — SAInT: An Interactive Tool for Sensitivity Analysis In The Loop
- sdqcpy — SDQCPy is a comprehensive Python package designed for synthetic data management, quality control, and validation.
- shap — A unified approach to explain the output of any machine learning model.
- shap-legacy — A unified approach to explain the output of any machine learning model.
- shapash — Shapash is a Python library which aims to make machine learning interpretable and understandable by everyone.
- shaperone — Shaperone is a fork of the SHAP library, fixing open issues to improve usability.
- singa-easy — The SINGA-EASY
- skift — scikit-learn wrappers for Python fastText
- smace — Semi-Model-Agnostic Contextual Explainer library
- smartpyml — smartpyml: A Comprehensive Machine Learning Library
- sparx-lib — Sparx Implementation
- sphynxml — no summary
- symbolic-pursuit — Learning outside the black-box: the pursuit of interpretable models
- trelawney — Generic Interpretability package
- trustML — Trust for Machine Learning
- volkanoban — A powerful stacking classifier framework that integrates advanced machine learning techniques, overfitting prevention, and explainability features such as LIME, SHAP, and model interpretation dashboards.
- wxyz-notebooks — notebook demos for experimental Jupyter widgets
- xai-compare — This repository aims to provide tools for comparing different explainability methods, enhancing the interpretation of machine learning models.
- XaI-Ensemble-API — A library for explaining data using a new ensemble of XAI methods
- XaI-Ensemble-VOCs-API — A library for explaining data using a new ensemble of XAI methods
- xai-explainer — A package for explaining deep learning models
- xai-feature-selection — Feature selection using XAI
- XAI-Library — An integrated library for explanation methods.
- xai-metrics — A package for analysis and evaluating metrics for Explainable AI (XAI)
- XeroML — A data management platform
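
For reference, a "declared dependency" means that a project's packaging metadata names lime as an install requirement, so installing the project also pulls in lime from PyPI. A minimal sketch of such a declaration in a setuptools `setup.py` (the project name and version pin below are hypothetical, not taken from any project above):

```python
# Minimal sketch of a project declaring a dependency on lime.
# The project name and version constraint are illustrative assumptions.
from setuptools import setup

setup(
    name="my-explainability-project",  # hypothetical package name
    version="0.1.0",
    install_requires=[
        "lime>=0.2",  # declares lime as a runtime dependency
    ],
)
```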