Computer Science, 2019, Vol. 46, Issue 9: 1-14. doi: 10.11896/j.issn.1002-137X.2019.09.001

• Survey •


Survey of Compressed Deep Neural Network

LI Qing-hua1,2, LI Cui-ping1,2, ZHANG Jing1,2, CHEN Hong1,2, WANG Shao-qing1,2,3   

  1. (Key Laboratory of Data Engineering and Knowledge Engineering (Renmin University of China), Ministry of Education, Beijing 100872, China)1;
    (School of Information, Renmin University of China, Beijing 100872, China)2;
    (School of Computer Science and Technology, Shandong University of Technology, Zibo, Shandong 255091, China)3
  • Received: 2018-12-11; Online: 2019-09-15; Published: 2019-09-02
  • Corresponding author: LI Cui-ping, born in 1971, female, PhD, professor; her main research interest is recommender systems. E-mail: licuiping@ruc.edu.cn.
  • About the authors: LI Qing-hua, born in 1991, male, PhD candidate; his main research interests include deep learning and model optimization. E-mail: qinghuali@ruc.edu.cn. ZHANG Jing, born in 1984, female, PhD, lecturer; her main research interest is social network analysis. CHEN Hong, born in 1965, female, PhD, professor; her main research interest is high-performance databases. WANG Shao-qing, born in 1981, male, PhD; his main research interest is recommender systems.


Abstract: In recent years, deep neural networks have achieved major breakthroughs in object recognition, image classification and related fields. However, training and testing these large networks face two limitations. First, they require a large amount of computation, and hence a great deal of time, so high-performance computing devices (such as GPUs) are needed to speed up training and testing. Second, a deep neural network model usually contains a large number of parameters, which demand high-capacity, high-speed memory for storage. These limitations hinder the broad adoption of deep neural networks: at present they are usually trained and tested on high-performance servers or clusters, and their use on mobile devices with strict real-time requirements, such as mobile phones, remains limited. This paper surveys recent algorithms for compressing deep neural networks and systematically introduces the main method families, including pruning, sparse regularization, decomposition, parameter sharing, mask-based acceleration, and discrete cosine transform (DCT) methods. Finally, future research directions for deep neural network compression are discussed.

Key words: Deep learning, Model compression, Neural network
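To make two of the surveyed method families concrete, the following is a minimal NumPy sketch of magnitude-based weight pruning and low-rank (SVD) decomposition. It is an illustration, not code from the paper; the weight matrix W, the sparsity level, and the rank r below are arbitrary assumptions chosen for demonstration.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal((256, 512))    # a stand-in fully connected weight matrix

    # Pruning: keep only the largest-magnitude weights (here the top 10%)
    # and zero out the rest via a binary mask of surviving connections.
    sparsity = 0.9
    threshold = np.quantile(np.abs(W), sparsity)
    mask = np.abs(W) >= threshold
    W_pruned = W * mask
    print(f"fraction of weights kept: {mask.mean():.2%}")

    # Decomposition: approximate W by a rank-r product A @ B obtained from SVD.
    # Storing A and B costs (256 + 512) * r values instead of 256 * 512.
    r = 32
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :r] * S[:r]                   # shape (256, r)
    B = Vt[:r, :]                          # shape (r, 512)
    rel_err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
    print(f"rank-{r} relative reconstruction error: {rel_err:.3f}")

In practice, the surveyed pruning methods typically retrain the network after masking so the remaining weights compensate for the removed ones, and decomposition methods fine-tune the factors; both steps are omitted here for brevity.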

CLC number: TP39