Computer Science ›› 2020, Vol. 47 ›› Issue (5): 190-197. doi: 10.11896/jsjkx.190700128

• Artificial Intelligence •

Analyzing Latent Representation of Deep Neural Networks Based on Feature Visualization

SHANG Jun-yuan, YANG Le-han, HE Kun   

  1. School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan 430074, China
  • Received: 2019-07-17 Online: 2020-05-15 Published: 2020-05-19
  • Corresponding author: HE Kun (brooklet60@hust.edu.cn)
  • About author: 804593872@qq.com
  • Supported by:
    National Natural Science Foundation of China (61772219); Fundamental Research Funds for the Central Universities (2019kfyXKJC021)

Analyzing Latent Representation of Deep Neural Networks Based on Feature Visualization

SHANG Jun-yuan, YANG Le-han, HE Kun   

  1. School of Computer Science and Technology,Huazhong University of Science and Technology,Wuhan 430074,China
  • Received: 2019-07-17 Online: 2020-05-15 Published: 2020-05-19
  • About author: SHANG Jun-yuan, born in 1994, M.S. candidate. His main research interests include machine learning and deep learning.
    HE Kun, born in 1972, professor and Ph.D. supervisor. Her main research interests include machine learning, deep learning, and optimization algorithms.
  • Supported by:
    This work was supported by the National Natural Science Foundation of China(61772219) and Fundamental Research Funds for the Central Universities of Ministry of Education of China (2019kfyXKJC021).

Abstract: Understanding deep neural networks through visualization can intuitively reveal their working mechanism; that is, it provides an explanation for the decisions made by the black-box model, which is especially important in fields such as medical diagnosis and autonomous driving. Most existing work is based on the activation-maximization framework: a neuron to be observed is selected, the input (e.g., a hidden-layer feature map or the original image) is optimized, and the change in the input that drives the observed neuron to its maximum activation is qualitatively taken as an explanation. However, this approach lacks an in-depth quantitative analysis of deep neural networks. This paper proposes two visualization meta-methods: structure visualization and rule-based visualization. Structure visualization proceeds layer by layer from shallow to deep, and finds that neurons in shallow layers capture general, global features, while neurons in deep layers target finer details. Rule-based visualization includes intersection and difference rules, which help reveal the existence of shared neurons and inhibition neurons; the former learn features common to different categories, while the latter suppress irrelevant features. Experiments analyze the representative convolutional network VGG and the residual network ResNet on the ImageNet and Microsoft COCO datasets. Quantitative analysis shows that both ResNet and VGG are highly sparse: masking some low-activation "noise" neurons does not reduce the classification accuracy of the networks and can even improve it to some extent. By visualizing and quantitatively analyzing the hidden-layer features of deep neural networks, this paper reveals their internal feature representation, thereby providing guidance and reference for the design of high-performance deep neural networks.

Key words: Shared neuron, Internal representation, Deep neural network, Feature visualization, Inhibition neuron

Abstract: The working mechanism of deep neural networks can be intuitively uncovered by visualization techniques. Visualizing deep neural networks provides interpretability for the decisions made by the black-box model, which is critically important in many fields, such as medical diagnosis and autonomous driving. Most existing works are based on the activation-maximization technique, which optimizes the input, either the hidden feature map or the original image, with respect to a chosen neuron. Qualitatively, the change in the input as the neuron approaches its maximum activation value is taken as an explanation. However, such methods lack a quantitative analysis of deep neural networks. To fill this gap, this paper proposes two meta-methods, namely structure visualization and rule-based visualization. Structure visualization works by visualizing from the shallow layers to the deep layers, and finds that neurons in shallow layers learn global characteristics while neurons in deep layers learn more specific features. Rule-based visualization includes the intersection and difference selection rules, and it helps reveal the existence of shared neurons and inhibition neurons, which learn the common features of different categories and inhibit unrelated features, respectively. Experiments are conducted on two representative deep networks, the convolutional network VGG and the residual network ResNet, using the ImageNet and COCO datasets. Quantitative analysis shows that ResNet and VGG are highly sparse in representation; thus, by removing some low-activation "noisy" neurons, the networks can keep or even improve their classification accuracy. This paper uncovers the latent representation of deep neural networks by visualizing and quantitatively analyzing hidden features, thus providing guidance and reference for the design of high-performance deep neural networks.
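The activation-maximization technique summarized above can be illustrated with a minimal, self-contained sketch. This is not the paper's code: the single ReLU "neuron", the learning rate, and the norm constraint are illustrative assumptions standing in for a real VGG/ResNet unit and an image input.

```python
import numpy as np

def neuron_activation(x, w):
    """Activation of a toy 'neuron': ReLU of a dot product."""
    return max(0.0, float(np.dot(w, x)))

def activation_maximization(w, steps=100, lr=0.1):
    """Gradient ascent on the input x to maximize the neuron's activation.
    For this linear toy neuron the gradient w.r.t. x is simply w; in a real
    network it would come from backpropagation through the chosen layer."""
    rng = np.random.default_rng(0)
    x = rng.normal(size=w.shape)            # random starting input
    for _ in range(steps):
        x += lr * w                          # ascend the activation gradient
        x /= max(np.linalg.norm(x), 1e-8)    # keep the input norm bounded
    return x

w = np.array([1.0, -2.0, 0.5])
x_opt = activation_maximization(w)
# For a linear neuron the optimized input aligns with the weight vector.
print(np.allclose(x_opt, w / np.linalg.norm(w), atol=1e-2))
```

With a real network, the same loop applies unchanged: only the gradient computation is replaced, and the resulting input (or feature map) is inspected as the qualitative explanation.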

Key words: Deep neural network, Feature visualization, Inhibition neuron, Internal representation, Shared neuron
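The masking experiment described in the abstract (removing low-activation "noisy" neurons and re-checking the prediction) can be sketched in the same spirit. Everything here is a toy assumption: random activations stand in for a hidden feature layer of VGG/ResNet, and a random linear readout stands in for the classifier head.

```python
import numpy as np

rng = np.random.default_rng(42)
acts = rng.exponential(scale=1.0, size=(8, 32))   # 8 samples x 32 "neurons"
acts[:, 20:] *= 0.01                              # make 12 neurons near-silent
readout = rng.normal(size=(32, 5))                # toy 5-class linear readout

def mask_low_activation(a, threshold):
    """Zero the columns (neurons) whose mean activation is below threshold."""
    keep = a.mean(axis=0) >= threshold
    return a * keep, keep

masked, keep = mask_low_activation(acts, threshold=0.1)
before = (acts @ readout).argmax(axis=1)   # predictions with all neurons
after = (masked @ readout).argmax(axis=1)  # predictions after masking
agreement = np.mean(before == after)
print(f"kept {int(keep.sum())} of {keep.size} neurons; agreement = {agreement:.2f}")
```

Because the masked neurons contribute almost nothing to the readout, the predictions are typically unchanged, which mirrors the sparsity finding reported for ResNet and VGG.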

CLC Number: 

  • TP83