Computer Science ›› 2024, Vol. 51 ›› Issue (2): 245-251. doi: 10.11896/jsjkx.230300028

• Artificial Intelligence •

Local Interpretable Model-agnostic Explanations Based on Active Learning and Rational Quadratic Kernel

ZHOU Shenghao, YUAN Weiwei, GUAN Donghai   

  1. College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211100, China
  • Received: 2023-03-03  Revised: 2023-06-25  Online: 2024-02-15  Published: 2024-02-22
  • About author: ZHOU Shenghao, born in 1999, master. His main research interests include data mining and machine learning interpretability. YUAN Weiwei, born in 1981, Ph.D., professor. Her main research interests include data mining and intelligent computing.
  • Supported by:
    National Defense Basic Scientific Research Program of China (JCKY2020204C009).

Abstract: With the widespread adoption of deep learning models, understanding how a model reaches its decisions has become an urgent problem: complex, hard-to-interpret black-box models hinder the deployment of algorithms in real-world scenarios. LIME is the most popular local explanation method, but the perturbed data it generates is unstable, which biases the final explanation. To address this problem, ActiveLIME, a local interpretable model-agnostic explanation method based on active learning and a rational quadratic kernel, is proposed to make the local interpretable model more faithful to the original classifier. After generating perturbed data, ActiveLIME samples the perturbations with an active-learning query strategy, selects the perturbations with high uncertainty for training, and uses the local model with the highest accuracy across iterations to generate explanations for the instance of interest. For high-dimensional sparse samples that are prone to local overfitting, a rational quadratic kernel is introduced into the local model's loss function to reduce overfitting. Experiments indicate that ActiveLIME achieves better local fidelity and explanation quality than traditional local explanation algorithms.
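The abstract describes the ActiveLIME loop only in prose. The Python sketch below illustrates one plausible reading of that loop: Gaussian perturbation, an uncertainty-based query strategy, rational quadratic locality weights, and selection of the most locally accurate surrogate. All function names, hyperparameters, the use of the black box's top-class probability as the uncertainty proxy, and the choice of a logistic-regression surrogate are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def rational_quadratic_kernel(distances, alpha=1.0, length_scale=1.0):
    """Rational quadratic kernel k(d) = (1 + d^2 / (2*alpha*l^2))^(-alpha)."""
    return (1.0 + distances ** 2 / (2.0 * alpha * length_scale ** 2)) ** (-alpha)


def active_lime(predict_proba, x, n_perturb=500, n_keep=100, n_iters=5,
                noise=0.5, seed=0):
    """Hypothetical ActiveLIME-style loop for a single instance x."""
    rng = np.random.default_rng(seed)
    best_model, best_fidelity = None, -np.inf
    for _ in range(n_iters):
        # 1. Perturb the instance of interest with Gaussian noise.
        Z = x + noise * rng.standard_normal((n_perturb, x.shape[0]))
        proba = predict_proba(Z)                   # black-box predictions
        y = proba.argmax(axis=1)                   # black-box labels
        w = rational_quadratic_kernel(np.linalg.norm(Z - x, axis=1))
        # 2. Uncertainty-based query strategy (assumed proxy): keep the
        #    perturbations whose top-class probability is lowest, i.e.
        #    closest to the black box's decision boundary.
        keep = np.argsort(proba.max(axis=1))[:n_keep]
        if np.unique(y[keep]).size < 2:            # need two classes to fit
            continue
        # 3. Fit an interpretable, locality-weighted linear surrogate.
        surrogate = LogisticRegression(max_iter=1000)
        surrogate.fit(Z[keep], y[keep], sample_weight=w[keep])
        # 4. Keep the surrogate with the best weighted local fidelity.
        fidelity = np.average(surrogate.predict(Z) == y, weights=w)
        if fidelity > best_fidelity:
            best_model, best_fidelity = surrogate, fidelity
    return best_model, best_fidelity
```

In this sketch the surrogate can be applied to any classifier exposing a `predict_proba`-style interface; the paper's exact query strategy and loss-function formulation may differ.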

Key words: Local explanation, Perturbation sampling, Query strategy of active learning, Rational quadratic kernel

CLC Number: TP391