Computer Science ›› 2019, Vol. 46 ›› Issue (9): 1-14.doi: 10.11896/j.issn.1002-137X.2019.09.001

• Surveys •

Survey of Compressed Deep Neural Network

LI Qing-hua1,2, LI Cui-ping1,2, ZHANG Jing1,2, CHEN Hong1,2, WANG Shao-qing1,2,3   

  1. Key Laboratory of Data Engineering and Knowledge Engineering (Renmin University of China), Ministry of Education, Beijing 100872, China;
  2. School of Information, Renmin University of China, Beijing 100872, China;
  3. School of Computer Science and Technology, Shandong University of Technology, Zibo, Shandong 255091, China
  • Received: 2018-12-11  Online: 2019-09-15  Published: 2019-09-02

Abstract: In recent years, deep neural networks have achieved significant breakthroughs in object recognition, image classification and other fields. However, training and testing these networks face two main limitations. First, they require a large amount of computation, so high-performance computing devices (such as GPUs) are needed to keep training and testing times acceptable. Second, a deep neural network model usually contains a large number of parameters, which demand high-capacity, high-speed memory to store. These limitations hinder the widespread use of deep neural networks: at present, they are usually trained and tested on high-performance servers or clusters, while their application on mobile devices with strict real-time requirements, such as mobile phones, remains limited. This paper reviewed the progress of deep neural network compression algorithms in recent years and introduced the main compression methods, including pruning, sparse regularization, low-rank decomposition, parameter sharing, mask acceleration and discrete cosine transform methods. Finally, future research directions for compressed deep neural networks were discussed.
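To make the surveyed ideas concrete, the following minimal NumPy sketch (ours, not from the paper) illustrates two of the reviewed method families: magnitude-based weight pruning in the spirit of Han et al. [21], and low-rank decomposition of a fully connected layer via truncated SVD in the spirit of [38,44]. The layer shape, sparsity level and rank below are illustrative assumptions, not values from the surveyed works.

```python
import numpy as np

def prune_by_magnitude(weights, sparsity):
    """Magnitude pruning (cf. [21]): zero out the smallest-|w| fraction of
    weights and return a binary mask so pruned connections stay zero during
    any later fine-tuning."""
    k = int(weights.size * sparsity)                 # number of weights to drop
    threshold = np.sort(np.abs(weights), axis=None)[k]
    mask = (np.abs(weights) >= threshold).astype(weights.dtype)
    return weights * mask, mask

def decompose_low_rank(weights, rank):
    """Low-rank factorization (cf. [38,44]): approximate an m x n matrix W by
    A (m x r) times B (r x n), cutting mn parameters down to r(m + n)."""
    u, s, vt = np.linalg.svd(weights, full_matrices=False)
    a = u[:, :rank] * s[:rank]                       # absorb singular values into A
    b = vt[:rank, :]
    return a, b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(256, 512))                  # a hypothetical dense layer

    w_pruned, mask = prune_by_magnitude(w, sparsity=0.9)
    print("nonzeros kept:", int(mask.sum()), "of", w.size)

    a, b = decompose_low_rank(w, rank=32)
    rel_err = np.linalg.norm(w - a @ b) / np.linalg.norm(w)
    print("params:", w.size, "->", a.size + b.size, "rel. error: %.3f" % rel_err)
```

Under these assumptions, pruning keeps only the largest-magnitude 10% of the weights, and a rank-32 factorization stores r(m+n) parameters in place of mn; both trade a controlled approximation error for memory and compute savings, which is the common thread of the methods this survey covers.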

Key words: Deep learning, Model compression, Neural network

CLC Number: TP39
[1]TAIGMAN Y,YANG M,RANZATO M A,et al.DeepFace:Closing the gap to human-level performance in face verification[C]//IEEE Conference on Computer Vision and Pattern Recognition (CVPR).2014.
[2]SUN Y,WANG X G,TANG X O,et al.Deep learning face representation from predicting 10,000 classes[C]//IEEE Conference on Computer Vision and Pattern Recognition (CVPR).2014.
[3]GIRSHICK R B.Fast R-CNN[C]//International Conference on Computer Vision(ICCV).2015:1440-1448.
[4]GIRSHICK R B,DONAHUE J,DARRELL T,et al.Rich feature hierarchies for accurate object detection and semantic segmentation[C]//IEEE Conference on Computer Vision and Pattern Recognition (CVPR).2014:580-587.
[5]REN S,HE K,GIRSHICK R B,et al.Faster R-CNN:towards real-time object detection with region proposal networks[J].IEEE Transactions on Pattern Analysis & Machine Intelligence,2015,39(1):1137-1149.
[6]KRIZHEVSKY A,SUTSKEVER I,HINTON G E.Imagenet classification with deep convolutional neural networks[J].Neural Information Processing Systems (NIPS),2012,25(2):1106-1114.
[7]ZEILER M D,FERGUS R.Visualizing and understanding convolutional networks[C]//European Conference on Computer Vision (ECCV).2014:818-833.
[8]SZEGEDY C,LIU W,JIA Y,et al.Going deeper with convolutions[C]//IEEE Conference on Computer Vision and Pattern Recognition (CVPR).2015:1-9.
[9]SIMONYAN K,ZISSERMAN A.Very deep convolutional networks for large-scale image recognition[C]//International Conference on Learning Representations (ICLR).2015.
[10]SIMONYAN K,ZISSERMAN A.Very deep convolutional networks for large-scale image recognition[C]//International Conference on Learning Representations (ICLR).2015.
[11]CHEN L C,PAPANDREOU G,KOKKINOS I,et al.Semantic image segmentation with deep convolutional nets and fully connected CRFs[C]//International Conference on Learning Representations (ICLR).2015.
[12]GONG Y,WANG L,GUO R,et al.Multiscale orderless pooling of deep convolutional activation features[C]//European Conference on Computer Vision (ECCV).2014.
[13]SIMONYAN K,ZISSERMAN A.Very deep convolutional networks for large-scale image recognition[J].arXiv:1409.1556.
[14]DENIL M,SHAKIBI B,DINH L,et al.Predicting parameters in deep learning[C]//Neural Information Processing Systems(NIPS).2013.
[15]HORNIK K,STINCHCOMBE M,WHITE H.Multilayer feedforward networks are universal approximators [J].Neural Networks,1989,2(5):359-366.
[16]GARDNER M W,DORLING S R.Artificial neural networks (the multilayer perceptron)-a review of applications in the atmospheric sciences[J].Atmospheric Environment,1998,32(14/15):2627-2636.
[17]ZHOU F Y,JIN L P,DONG J.Review of Convolutional Neural Network[J].Chinese Journal of Computers,2017,40(7):1229-1251.(in Chinese)周飞燕,金林鹏,董军.卷积神经网络研究综述[J].计算机学报,2017,40(7):1229-1251.
[18]RUMELHART D E,HINTON G,WILLIAMS R J.Learning representations by back-propagating errors[J].Nature,1986,323(6088):533-536.
[19]LECUN Y,BOTTOU L,BENGIO Y S,et al.Gradient-Based Learning Applied to Document Recognition[J].Proceedings of the IEEE,1998,86(11):2278-2324.
[20]ZHANG Q H,WAN C X.Review of Convolutional Neural Network[J].Journal of Zhongyuan University of Technology,2017,28(3):1671-6906.(in Chinese)张庆辉,万晨霞.卷积神经网络综述[J].中原工学院学报,2017,28(3):1671-6906.
[21]HAN S,POOL J,TRAN J,et al.Learning both weights and connections for efficient neural network[C]//Neural Information Processing Systems(NIPS).2015:1135-1143.
[22]GUO Y W,YAO A B,CHEN Y R.Dynamic Network Surgery for Efficient DNNs[C]//Neural Information Processing Systems(NIPS).2016.
[23]LI H,KADAV A,DURDANOVIC I,et al.Pruning Filters for Efficient ConvNets[C]//International Conference on Learning Representations (ICLR).2017.
[24]MOLCHANOV P,TYREE S,KARRAS T,et al.Pruning Convolutional Neural Networks for Resource Efficient Inference[C]//International Conference on Learning Representations (ICLR).2017.
[25]LODISH H,BERK A,ZIPURSKY S L,et al.Molecular Cell Biology:Neurotransmitters,Synapses,and Impulse Transmission[M].New York:W.H.Freeman,2000.
[26]ALVAREZ J M,SALZMANN M.Learning the Number of Neurons in Deep Networks[C]//Neural Information Processing Systems(NIPS).2016.
[27]WEN W,WU C,WANG Y,et al.Learning Structured Sparsity in Deep Neural Networks[C]//Neural Information Processing Systems(NIPS).2016.
[28]WANG S J,CAI H R,BILMES J,et al.Training Compressed Fully-Connected Networks with a Density-Diversity Penalty[C]//International Conference on Learning Representations (ICLR).2017.
[29]YUAN M,LIN Y.Model selection and estimation in regression with grouped variables[J].Journal of the Royal Statistical Society,Series B,2006,68(1):49-67.
[30]KIM S,XING E P.Tree-guided group lasso for multi-task regression with structured sparsity[C]//Proceedings of the 27th International Conference on Machine Learning.2010.
[31]YUAN M,LIN Y.Model selection and estimation in regression with grouped variables[J].Journal of the Royal Statistical Society,Series B,2006,68(1):49-67.
[32]BARTLETT P L.For valid generalization the size of the weights is more important than the size of the network[C]//Neural Information Processing Systems(NIPS).1996.
[33]KROGH A,HERTZ J A.A simple weight decay can improve generalization[C]//Neural Information Processing Systems(NIPS).1992.
[34]THEODORIDIS S.Machine Learning:A Bayesian and Optimization Perspective[M].Academic Press,2015.
[35]COLLINS M D,KOHLI P.Memory Bounded Deep Convolutional Networks[J].arXiv:1412.1442.
[36]SIMON N,FRIEDMAN J,HASTIE T,et al.A sparse-group lasso[J].Journal of Computational and Graphical Statistics,2013,22(2):231-245.
[37]PARIKH N,BOYD S.Proximal algorithms[J].Foundations and Trends in Optimization,2014,1(3):123-231.
[38]JADERBERG M,VEDALDI A,ZISSERMAN A.Speeding up Convolutional Neural Networks with Low Rank Expansions[C]//British Machine Vision Conference.2014.
[39]LEBEDEV V,GANIN Y,RAKHUBA M,et al.Speeding-up Convolutional Neural Networks Using Fine-tuned CP-decomposition[C]//International Conference on Learning Representations (ICLR).2015.
[40]JIN J,DUNDAR A,CULURCIELLO E.Flattened Convolutional Neural Networks for Feedforward Acceleration[C]//International Conference on Learning Representations (ICLR).2015.
[41]KIM Y D,PARK E,YOO S J,et al.Compression of Deep Convolutional Neural Networks for Fast and Low Power Mobile Applications[C]//International Conference on Learning Representations (ICLR).2016.
[42]RIGAMONTI R,SIRONI A,LEPETIT V,et al.Learning separable filters[C]//IEEE Conference on Computer Vision and Pattern Recognition (CVPR).2013:2754-2761.
[43]KOLDA T G,BADER B W.Tensor decompositions and applications[J].Siam Review,2009,51(3):455-500.
[44]DENTON E,ZAREMBA W,BRUNA J,et al.Exploiting linear structure within convolutional networks for efficient evaluation[J].arXiv:1404.0736.
[45]SORBER L,VAN BAREL M,DE LATHAUWER L.Tensorlab v2.0[EB/OL].http://tensorlab.net.
[46]TOMASI G,BRO R.A comparison of algorithms for fitting the parafac model[J].Computational Statistics & Data Analysis,2006,50(7):1700-1734.
[47]TUCKER L R.Some mathematical notes on three-mode factor analysis[J].Psychometrika,1966,31(3):279-311.
[48]GONG Y C,LIU L,YANG M,et al.Compressing Deep Convolutional Networks Using Vector Quantization[C]//International Conference on Learning Representations (ICLR).2015.
[49]CHEN W,WILSON J T,TYREE S,et al.Compressing Neural Networks with the Hashing Trick[C]//International Conference on Machine Learning.2015:2285-2294.
[50]HAN S,MAO H Z,DALLY W J.Deep Compression:Compressing Deep Neural Networks with Pruning,Trained Quantization and Huffman Coding[C]//International Conference on Learning Representations (ICLR).2016.
[51]CHOI Y,EL-KHAMY M,LEE J.Towards the Limit of Network Quantization[C]//International Conference on Learning Representations (ICLR).2017.
[52]ZHOU A J,YAO A B,GUO Y W,et al.Incremental Network Quantization: Towards Lossless Cnns with Low-precision Weights[C]//International Conference on Learning Representations (ICLR).2017.
[53]VAN LEEUWEN J.On the construction of Huffman trees[C]//International Colloquium on Automata,Languages and Programming (ICALP).1976:382-410.
[54]FIGURNOV M,IBRAIMOVA A,VETROV D,et al.PerforatedCNNs:Acceleration through Elimination of Redundant Convolutions[C]//Neural Information Processing Systems(NIPS).2016.
[55]LIN S H,JI R R,CHEN C,et al.ESPACE:Accelerating Convolutional Neural Networks via Eliminating Spatial & Channel Redundancy[C]//AAAI Conference on Artificial Intelligence(AAAI).2017.
[56]SIDIROGLOU-DOUSKOS S,MISAILOVIC S,HOFFMANN H,et al.Managing performance vs. accuracy trade-offs with loop perforation[C]//ESEC/FSE.2011:124-134.
[57]MISAILOVIC S,SIDIROGLOU S,HOFFMANN H,et al.Quality of service profiling[C]//International Conference on Software Engineering (ICSE).2010:25-34.
[58]MISAILOVIC S,ROY D M,RINARD M C.Probabilistically accurate program transformations[C]//Static Analysis Symposium.2011:316-333.
[59]CHEN W,WILSON J T,TYREE S,et al.Compressing Convolutional Neural Networks[J].arXiv:1506.04449.
[60]WANG Y H,XU C,YOU S,et al.CNNpack:Packing Convolutional Neural Networks in the Frequency Domain[C]//Neural Information Processing Systems(NIPS).2016.
[61]RAO K R,YIP P.Discrete cosine transform:algorithms,advantages,applications[M].Academic Press Professional Inc,2014.