
2021 Update: A Curated Collection of Recent Papers, Books, Blogs, and Resources on Interpretable Machine Learning




    To interpret means to explain or to present something in understandable terms. In the context of machine learning, interpretability is therefore the ability of a model to be explained and presented in terms that humans can understand. [Finale Doshi-Velez]

    Machine learning models are often described as "black boxes": they can produce accurate predictions, yet the logic behind those predictions is not directly visible. How, then, do we extract meaningful insights from a model? What should we keep in mind, and which tools or techniques do we need? These are the key questions that arise whenever model interpretability is discussed; a minimal example of extracting one such insight from a trained model follows below.
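
    As a minimal illustration of extracting such an insight, the sketch below uses scikit-learn's permutation importance, which shuffles one feature at a time and records how much the validation score drops. The dataset and model here are arbitrary stand-ins, not part of the original resource list.

```python
# A minimal sketch: model-agnostic feature importance via permutation.
# The dataset and model are placeholders chosen only for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in validation score.
result = permutation_importance(model, X_val, y_val, n_repeats=10, random_state=0)
top = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])[:5]
for name, score in top:
    print(f"{name}: {score:.4f}")
```

    The point is not the specific numbers but the workflow: a trained "black box" plus a model-agnostic probe already yields a ranked list of the features the model relies on most.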


 

 

    Download links for all of the resources are available at the original source (linked at the end of this post).

    This collection covers recent frontier research on explainable AI (XAI). As the figure below suggests, publications on explainable/interpretable AI have been growing rapidly in recent years.

[Figure: publication trend for explainable/interpretable AI research]

    The figure below illustrates several XAI use cases; the publications collected here are grouped into categories along these lines.

[Figure: XAI use cases and the corresponding publication categories]

Research Papers

    The elephant in the interpretability room: Why use attention as explanation when we have saliency methods, EMNLP Workshop 2020

    Explainable Machine Learning in Deployment, FAT 2020

    A brief survey of visualization methods for deep learning models from the perspective of Explainable AI, Information Visualization 2020

    Explaining Explanations in AI, ACM FAT 2019

    Machine learning interpretability: A survey on methods and metrics, Electronics, 2019

    A Survey on Explainable Artificial Intelligence (XAI): Towards Medical XAI, IEEE TNNLS 2020

    Interpretable machine learning: definitions, methods, and applications, Arxiv preprint 2019

    Visual Analytics in Deep Learning: An Interrogative Survey for the Next Frontiers, IEEE Transactions on Visualization and Computer Graphics, 2019

    Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI, Information Fusion, 2019

    Evaluating Explanation Without Ground Truth in Interpretable Machine Learning, Arxiv preprint 2019

    A survey of methods for explaining black box models, ACM Computing Surveys, 2018

    Explaining Explanations: An Overview of Interpretability of Machine Learning, IEEE DSAA, 2018

    Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI), IEEE Access, 2018

    Explainable artificial intelligence: A survey, MIPRO, 2018

    How Convolutional Neural Networks See the World — A Survey of Convolutional Neural Network Visualization Methods, Mathematical Foundations of Computing 2018

    Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models, Arxiv 2017

    Towards A Rigorous Science of Interpretable Machine Learning, Arxiv preprint 2017

    Explaining Explanation, Part 1: Theoretical Foundations, IEEE Intelligent System 2017

    Explaining Explanation, Part 2: Empirical Foundations, IEEE Intelligent System 2017

    Explaining Explanation, Part 3: The Causal Landscape, IEEE Intelligent System 2017

    Explaining Explanation, Part 4: A Deep Dive on Deep Nets, IEEE Intelligent System 2017

    An accurate comparison of methods for quantifying variable importance in artificial neural networks using simulated data, Ecological Modelling 2004

    Review and comparison of methods to study the contribution of variables in artificial neural network models, Ecological Modelling 2003

Books

    Explainable Artificial Intelligence (xAI) Approaches and Deep Meta-Learning Models, Advances in Deep Learning Chapter 2020

    Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, Springer 2019

    Explanation in Artificial Intelligence: Insights from the Social Sciences, 2017 arxiv preprint

    Visualizations of Deep Neural Networks in Computer Vision: A Survey, Springer Transparent Data Mining for Big and Small Data 2017

    Explanatory Model Analysis: Explore, Explain and Examine Predictive Models

    Interpretable Machine Learning: A Guide for Making Black Box Models Explainable

    An Introduction to Machine Learning Interpretability: An Applied Perspective on Fairness, Accountability, Transparency, and Explainable AI

Open Courses

    Interpretability and Explainability in Machine Learning, Harvard University

Papers by Category

    We mainly follow the taxonomy used in the survey papers above and divide the XAI/XML papers into the following branches.

    1. Transparent Model Design

    2. Post-Explanation

    2.1 Model Explanation (Model-level)

    2.2 Model Inspection

    2.3 Outcome Explanation

    2.3.1 Feature Attribution / Importance (Saliency Maps, illustrated by the sketch after this outline)

    2.4 Neuron Importance

    2.5 Example-based Explanations

    2.5.1 Counterfactual Explanations (Recourse)

    2.5.2 Influential Instances

    2.5.3 Prototypes & Criticisms
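
    For the feature attribution / saliency branch (item 2.3.1), most methods score each input dimension by how strongly it influences the model's output. The snippet below is only an illustrative vanilla-gradient sketch; the tiny PyTorch model and the random input are placeholders and do not come from any of the papers listed here.

```python
# Illustrative sketch of a vanilla gradient saliency map.
# The tiny CNN and the random "image" are placeholders for demonstration.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()

x = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in input image
logits = model(x)
score = logits[0, logits.argmax()]                # logit of the predicted class
score.backward()                                  # d(score) / d(pixel)

# Per-pixel importance: absolute gradient, maxed over colour channels.
saliency = x.grad.abs().max(dim=1).values         # shape (1, 32, 32)
print(saliency.shape)
```

    Many attribution methods refine this basic recipe, for example by smoothing gradients, integrating them along a path, or replacing them with layer-wise relevance scores, while others are purely perturbation-based.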

    Uncategorized Papers on Model/Instance Explanation

    Does Explainable Artificial Intelligence Improve Human Decision-Making?, AAAI 2021

    Incorporating Interpretable Output Constraints in Bayesian Neural Networks, NeurIPS 2020

    Towards Interpretable Natural Language Understanding with Explanations as Latent Variables, NeurIPS 2020

    Learning identifiable and interpretable latent models of high-dimensional neural activity using pi-VAE, NeurIPS 2020

    Generative causal explanations of black-box classifiers, NeurIPS 2020

    Learning outside the Black-Box: The pursuit of interpretable models, NeurIPS 2020

    Explaining Groups of Points in Low-Dimensional Representations, ICML 2020

    Explaining Knowledge Distillation by Quantifying the Knowledge, CVPR 2020

    Fanoos: Multi-Resolution, Multi-Strength, Interactive Explanations for Learned Systems, IJCAI 2020

    Machine Learning Explainability for External Stakeholders, IJCAI 2020

    Py-CIU: A Python Library for Explaining Machine Learning Predictions Using Contextual Importance and Utility, IJCAI 2020

    Interpretable Models for Understanding Immersive Simulations, IJCAI 2020

    Towards Automatic Concept-based Explanations, NeurIPS 2019

    Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead, Nature Machine Intelligence 2019

    InterpretML: A Unified Framework for Machine Learning Interpretability, Arxiv preprint 2019

    All Models are Wrong, but Many are Useful: Learning a Variable’s Importance by Studying an Entire Class of Prediction Models Simultaneously, JMLR 2019

    On the Robustness of Interpretability Methods, ICML 2018 workshop

    Towards A Rigorous Science of Interpretable Machine Learning, Arxiv preprint 2017

    Object Region Mining With Adversarial Erasing: A Simple Classification to Semantic Segmentation Approach, CVPR 2017

    LOCO, Distribution-Free Predictive Inference For Regression, Arxiv preprint 2016

    Explaining data-driven document classifications, MIS Quarterly 2014

Evaluation Methods

    Evaluations and Methods for Explanation through Robustness Analysis, arxiv preprint 2020

    Evaluating and Aggregating Feature-based Model Explanations, IJCAI 2020

    Sanity Checks for Saliency Metrics, AAAI 2020

    A benchmark for interpretability methods in deep neural networks, NeurIPS 2019

    Methods for interpreting and understanding deep neural networks, Digital Signal Processing 2017

    Evaluating the visualization of what a Deep Neural Network has learned, IEEE Transactions on Neural Networks and Learning Systems 2015
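
    A recurring idea in these evaluation papers is perturbation-based faithfulness: if an explanation is faithful, removing the features it ranks highest should hurt the prediction more than removing random ones. The function below is only an illustrative sketch of that idea, not a reimplementation of any specific benchmark; the model, input vector, and attribution scores are assumed to be supplied by the caller.

```python
# Illustrative perturbation-based faithfulness check for a tabular classifier.
# `model` (with predict_proba), `x`, and `attribution` are supplied by the caller.
import numpy as np

def deletion_drop(model, x, attribution, k=5, baseline=0.0):
    """Replace the k features with the largest attribution by `baseline` and
    return the drop in the predicted-class probability. A larger drop suggests
    the attribution points at features the model actually relies on."""
    x = np.asarray(x, dtype=float)
    row = x.reshape(1, -1)
    cls = int(model.predict_proba(row)[0].argmax())
    p_orig = model.predict_proba(row)[0, cls]

    x_pert = x.copy()
    x_pert[np.argsort(-np.abs(attribution))[:k]] = baseline
    p_pert = model.predict_proba(x_pert.reshape(1, -1))[0, cls]
    return p_orig - p_pert
```

    Comparing this drop against the drop from deleting k random features gives a crude but useful sanity check in the spirit of the benchmarks listed above.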

Open-Source Python Libraries

    AIF360: https://github.com/Trusted-AI/AIF360

    AIX360: https://github.com/IBM/AIX360

    Anchor: https://github.com/marcotcr/anchor, scikit-learn

    Alibi: https://github.com/SeldonIO/alibi

    Alibi-detect: https://github.com/SeldonIO/alibi-detect

    BlackBoxAuditing: https://github.com/algofairness/BlackBoxAuditing, scikit-learn

    Boruta-Shap: https://github.com/Ekeany/Boruta-Shap, scikit-learn

    casme: https://github.com/kondiz/casme, PyTorch

    Captum: https://github.com/pytorch/captum, PyTorch

    cnn-exposed: https://github.com/idealo/cnn-exposed, TensorFlow

    DALEX: https://github.com/ModelOriented/DALEX

    Deeplift: https://github.com/kundajelab/deeplift, TensorFlow, Keras

    DeepExplain: https://github.com/marcoancona/DeepExplain, TensorFlow, Keras

    Deep Visualization Toolbox: https://github.com/yosinski/deep-visualization-toolbox, Caffe

    Eli5: https://github.com/TeamHG-Memex/eli5, scikit-learn, Keras, XGBoost, LightGBM, CatBoost, etc.

    explainx: https://github.com/explainX/explainx, XGBoost, CatBoost

    Grad-cam-Tensorflow: https://github.com/insikk/Grad-CAM-tensorflow, TensorFlow

    Innvestigate: https://github.com/albermax/innvestigate, TensorFlow, Theano, CNTK, Keras

    imodels: https://github.com/csinva/imodels

    InterpretML: https://github.com/interpretml/interpret

    interpret-community: https://github.com/interpretml/interpret-community

    Integrated-Gradients: https://github.com/ankurtaly/Integrated-Gradients, TensorFlow

    Keras-grad-cam: https://github.com/jacobgil/keras-grad-cam, Keras

    Keras-vis: https://github.com/raghakot/keras-vis, Keras

    keract: https://github.com/philipperemy/keract, Keras

    Lucid: https://github.com/tensorflow/lucid, TensorFlow

    LIT: https://github.com/PAIR-code/lit, TensorFlow, specialized for NLP tasks

    Lime: https://github.com/marcotcr/lime, nearly all Python ML frameworks

    LOFO: https://github.com/aerdem4/lofo-importance, scikit-learn

    modelStudio: https://github.com/ModelOriented/modelStudio, Keras, TensorFlow, XGBoost, LightGBM, h2o

    pytorch-cnn-visualizations: https://github.com/utkuozbulak/pytorch-cnn-visualizations, PyTorch

    Pytorch-grad-cam: https://github.com/jacobgil/pytorch-grad-cam, PyTorch

    PDPbox: https://github.com/SauceCat/PDPbox, scikit-learn

    py-ciu: https://github.com/TimKam/py-ciu/

    PyCEbox: https://github.com/AustinRochford/PyCEbox

    path_explain: https://github.com/suinleelab/path_explain, TensorFlow

    rulefit: https://github.com/christophM/rulefit

    rulematrix: https://github.com/rulematrix/rule-matrix-py

    Saliency: https://github.com/PAIR-code/saliency, TensorFlow

    SHAP: https://github.com/slundberg/shap, nearly all Python ML frameworks (see the usage sketch after this list)

    Skater: https://github.com/oracle/Skater

    TCAV: https://github.com/tensorflow/tcav, TensorFlow, scikit-learn

    skope-rules: https://github.com/scikit-learn-contrib/skope-rules, scikit-learn

    TensorWatch: https://github.com/microsoft/tensorwatch.git, TensorFlow

    tf-explain: https://github.com/sicara/tf-explain, TensorFlow

    Treeinterpreter: https://github.com/andosa/treeinterpreter, scikit-learn

    WeightWatcher: https://github.com/CalculatedContent/WeightWatcher, Keras, PyTorch

    What-if-tool: https://github.com/PAIR-code/what-if-tool, TensorFlow

    XAI: https://github.com/EthicalML/xai, scikit-learn
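
    To give a feel for how these libraries are used, the snippet below is a minimal SHAP example for a tree-based scikit-learn model; the dataset and model are arbitrary stand-ins, and the other libraries above follow broadly similar explainer-plus-explain-call workflows.

```python
# Minimal sketch: per-feature SHAP attributions for a tree-based regressor.
# Dataset and model are placeholders; see the SHAP repository for full examples.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])  # shape (100, n_features)

# Each row decomposes one prediction into per-feature contributions
# relative to explainer.expected_value.
print(shap_values.shape)
```

    Plotting helpers such as shap.summary_plot can then visualize these contributions, and gradient-based libraries (Captum, tf-explain, etc.) expose analogous attribute/explain calls for deep models.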

    Related Repositories

    https://github.com/jphall663/awesome-machine-learning-interpretability

    https://github.com/lopusz/awesome-interpretable-machine-learning

    https://github.com/pbiecek/xai_resources

    https://github.com/h2oai/mli-resources

From: https://blog.51cto.com/u_13046751/6538882
