Computer Science ›› 2024, Vol. 51 ›› Issue (7): 257-271. doi: 10.11896/jsjkx.240100045

• Artificial Intelligence •

Lightweight Deep Neural Network Models for Edge Intelligence:A Survey

XU Xiaohua1,2, ZHOU Zhangbing1, HU Zhongxu2, LIN Shixun2, YU Zhenjie1,3   

    1 School of Information Engineering,China University of Geosciences(Beijing),Beijing 100083,China
    2 Information Technology Education Center,Zhaotong University,Zhaotong,Yunnan 657000,China
    3 School of Computer Information Engineering,Changzhou Institute of Technology,Changzhou,Jiangsu 213000,China
  • Received:2024-01-02 Revised:2024-04-27 Online:2024-07-15 Published:2024-07-10
  • About author:XU Xiaohua,born in 1980,associate professor.His main research interests include deep learning and edge computing.
    ZHOU Zhangbing,born in 1974,Ph.D,professor,is a member of CCF(No.28475M).His main research interests include service computing and edge computing.
  • Supported by:
    China Geological Survey(CGS) Work Project:Geoscience Literature Knowledge Services and Decision Supporting(DD20230139) and Special Basic Cooperative Research Programs of Yunnan Provincial Undergraduate Universities Association(202301BA070001-095).

Abstract: With the rapid development of the Internet of Things(IoT) and artificial intelligence(AI),the combination of edge computing and AI has given rise to a new research field called edge intelligence.Edge intelligence possesses appropriate computing power and can provide real-time,efficient,and intelligent responses.It has significant applications in areas such as smart cities,industrial IoT,smart healthcare,autonomous driving,and smart homes.To improve accuracy,traditional deep neural networks often adopt deeper and larger architectures,resulting in significant increases in model parameters,storage requirements,and computational complexity.However,because IoT terminal devices are limited in computing power,storage space,and energy resources,deep neural networks are difficult to deploy directly on them.Therefore,lightweight deep neural networks with a small memory footprint,low computational cost,high accuracy,and real-time inference capability have become a research hotspot.This paper first reviews the development of edge intelligence and analyzes the practical requirements that intelligent terminals impose on lightweight deep neural networks.Two approaches to constructing lightweight deep neural network models are identified:model compression techniques and lightweight architecture design.Next,it discusses in detail five main model compression techniques:parameter pruning,parameter quantization,low-rank decomposition,knowledge distillation,and mixed compression techniques.It summarizes their respective performance advantages and limitations,and evaluates their compression effects on commonly used datasets.Then,the paper analyzes in depth the lightweight architecture design strategies of adjusting the convolution kernel size,reducing the number of input channels,decomposing convolution operations,and adjusting convolution width,and compares several commonly used lightweight network models.Finally,future research directions for lightweight deep neural networks in the field of edge intelligence are discussed.
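To make parameter pruning concrete,the following minimal sketch applies unstructured magnitude pruning in the spirit of Deep Compression[19].It assumes PyTorch;the toy network shape and the 50% sparsity target are illustrative assumptions,not settings from any surveyed paper.

```python
import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, sparsity: float = 0.5) -> None:
    """Zero out the smallest-magnitude weights of every Linear/Conv2d layer."""
    for module in model.modules():
        if isinstance(module, (nn.Linear, nn.Conv2d)):
            weight = module.weight.data
            k = int(weight.numel() * sparsity)  # number of weights to remove
            if k == 0:
                continue
            # The k-th smallest absolute value serves as the pruning threshold
            threshold = weight.abs().flatten().kthvalue(k).values
            weight.mul_(weight.abs() > threshold)  # pruned entries become 0

# Hypothetical toy model, for illustration only
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
magnitude_prune(model, sparsity=0.5)
print((model[0].weight == 0).float().mean())  # ~0.5 of weights are now zero
```

In practice the pruned network is fine-tuned afterwards to recover accuracy,and structured variants[27]remove whole filters or channels so that the speed-up requires no sparse-matrix support.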
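Parameter quantization can likewise be illustrated by symmetric per-tensor int8 quantization of weights,sketched below under the common max-abs scaling rule;the rule and tensor shape are illustrative assumptions rather than a scheme prescribed by any single cited work.

```python
import torch

def quantize_int8(weight: torch.Tensor):
    """Map float32 weights to int8; return (q, scale) for dequantization."""
    scale = weight.abs().max().clamp(min=1e-8) / 127.0  # symmetric range
    q = torch.clamp(torch.round(weight / scale), -127, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.to(torch.float32) * scale

w = torch.randn(256, 784)                    # illustrative weight matrix
q, s = quantize_int8(w)
print("storage: 4 bytes -> 1 byte per weight")
print("max abs error:", (dequantize(q, s) - w).abs().max().item())
```

This 8-bit scheme cuts weight storage by 4x;binary[35]and ternary[37]networks push the same idea down to 1-2 bits,and mixed-precision methods[41-46]assign a different bit-width to each layer.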
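Low-rank decomposition can be sketched on a single fully connected layer:the weight matrix is factorized by truncated SVD into two thin layers.PyTorch and the rank r=32 are illustrative assumptions;in practice the rank is selected per layer,e.g. learned as in[55].

```python
import torch
import torch.nn as nn

def factorize_linear(layer: nn.Linear, rank: int) -> nn.Sequential:
    """Approximate W (out x in) as (U_r S_r)(V_r^T), i.e. two smaller layers."""
    U, S, Vh = torch.linalg.svd(layer.weight.data, full_matrices=False)
    first = nn.Linear(layer.in_features, rank, bias=False)
    second = nn.Linear(rank, layer.out_features, bias=layer.bias is not None)
    first.weight.data = Vh[:rank].clone()                   # (rank, in)
    second.weight.data = (U[:, :rank] * S[:rank]).clone()   # (out, rank)
    if layer.bias is not None:
        second.bias.data = layer.bias.data.clone()
    return nn.Sequential(first, second)

layer = nn.Linear(784, 256)                    # 784*256 + 256 ~ 201k parameters
compressed = factorize_linear(layer, rank=32)  # 784*32 + 32*256 + 256 ~ 34k
```

The factorized pair computes the same mapping up to the truncation error,trading a small accuracy loss for roughly a 6x parameter reduction in this illustrative case.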
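The knowledge distillation objective of Hinton et al.[59]is compact enough to state directly:the student matches the teacher's temperature-softened outputs as well as the ground-truth labels.The temperature T=4 and weight alpha=0.9 below are illustrative hyperparameters,not values fixed by the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      T: float = 4.0, alpha: float = 0.9) -> torch.Tensor:
    # KL divergence between softened teacher and student distributions;
    # the T^2 factor keeps soft- and hard-label gradients on comparable scales.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```

During training the teacher runs in evaluation mode and only the student's parameters receive gradients;feature-level variants such as FitNets[60]and attention transfer[61]distill intermediate representations instead of logits.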
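On the architecture-design side,the decomposition of convolution operations is epitomized by the depthwise separable convolution of MobileNets[88]:a per-channel spatial convolution followed by a 1x1 pointwise convolution.The sketch below assumes PyTorch,with illustrative channel sizes.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        # groups=in_ch: each depthwise filter sees exactly one input channel
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.depthwise(x))

std = nn.Conv2d(64, 128, 3, padding=1)   # 73 856 parameters
sep = DepthwiseSeparableConv(64, 128)    #  8 960 parameters, ~8x fewer
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(std), "vs", count(sep))
```

For a k x k kernel the parameter cost drops from k^2·C_in·C_out to k^2·C_in + C_in·C_out,which is the source of the 8x-9x savings in MobileNet-style networks.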

Key words: Edge intelligence, Deep neural networks, Lightweight neural network, Model compression, Lightweight architecture design

CLC Number: TP183
[1]LEI B,ZHOU J,MA M,et al.DQN based Blockchain Data Storage in Resource-constrained IoT System[C]//2023 IEEE Wireless Communications and Networking Conference(WCNC).IEEE,2023:1-6.
[2]CAO K,LIU Y,MENG G,et al.An overview on edge computing research[J].IEEE Access,2020,8:85714-85728.
[3]GHAZNAVI M,JALALPOUR E,SALAHUDDIN M A,et al.Content delivery network security:A survey[J].IEEE Communications Surveys & Tutorials,2021,23(4):2166-2190.
[4]DAS R,INUWA M M.A review on fog computing:issues,characteristics,challenges,and potential applications[J].Telematics and Informatics Reports,2023,10(1):1-20.
[5]BABAR M,KHAN M S,ALI F,et al.Cloudlet computing:recent advances,taxonomy,and challenges[J].IEEE Access,2021,9:29609-29622.
[6]ZHENG F,ZHU D W,ZANG W Q,et al.Edge Computing:Review and Application Research on New Computing Paradigm[J].Journal of Frontiers of Computer Science and Technology,2020,14(4):541-553.
[7]SHI W,CAO J,ZHANG Q,et al.Edge computing:Vision and challenges[J].IEEE Internet of Things Journal,2016,3(5):637-646.
[8]ZHANG X,WANG Y,LU S,et al.OpenEI:An open framework for edge intelligence[C]//2019 IEEE 39th International Conference on Distributed Computing Systems(ICDCS).IEEE,2019:1840-1851.
[9]ZHOU Z,CHEN X,LI E,et al.Edge intelligence:Paving the last mile of artificial intelligence with edge computing[J].Proceedings of the IEEE,2019,107(8):1738-1762.
[10]DENG S,ZHAO H,FANG W,et al.Edge intelligence:The confluence of edge computing and artificial intelligence[J].IEEE Internet of Things Journal,2020,7(8):7457-7469.
[11]SUBHASHINI R,KHANG A.The role of Internet of Things(IoT) in smart city framework[M]//Smart Cities.CRC Press,2023:31-56.
[12]SINGH A,SAINI K,NAGAR V,et al.Artificial intelligence in edge devices[M]//Advances in Computers.Elsevier,2022,127:437-484.
[13]KRIZHEVSKY A,SUTSKEVER I,HINTON G E.ImageNet classification with deep convolutional neural networks[J].Communications of the ACM,2017,60(6):84-90.
[14]SUBRAMANIAN M,LV N P,VE S.Hyperparameter optimization for transfer learning of VGG16 for disease identification in corn leaves using Bayesian optimization[J].Big Data,2022,10(3):215-229.
[15]JAWOREK-KORJAKOWSKA J,KLECZEK P,GORGON M.Melanoma Thickness Prediction Based on Convolutional Neural Network With VGG-19 Model Transfer Learning[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops(CVPRW).IEEE Computer Society,2019:2748-2756.
[16]SHAFIQ M,GU Z.Deep residual learning for image recognition:A survey[J].Applied Sciences,2022,12(18):8972.
[17]PANDA M K,SUBUDHI B N,VEERAKUMAR T,et al.Modified ResNet-152 Network With Hybrid Pyramidal Pooling for Local Change Detection[J].IEEE Transactions on Artificial Intelligence,2023,1(1):1-14.
[18]YANG Z,ZHANG H.Comparative Analysis of Structured Pruning and Unstructured Pruning[C]//International Conference on Frontier Computing.Singapore:Springer Nature Singapore,2021:882-889.
[19]HAN S,MAO H,DALLY W J.Deep compression:Compressing deep neural networks with pruning,trained quantization and Huffman coding[J].arXiv:1510.00149,2015.
[20]MOLCHANOV P,TYREE S,KARRAS T,et al.Pruning convolutional neural networks for resource efficient inference[J].arXiv:1611.06440,2016.
[21]DONG X,YANG Y.Network Pruning via Transformable Architecture Search[J].arXiv:1905.09717,2019.
[22]SAKAI Y,ETO Y,TERANISHI Y.Structured pruning for deep neural networks with adaptive pruning rate derivation based on connection sensitivity and loss function[J].Journal of Advances in Information Technology,2022,13(3):295-300.
[23]CHOI Y,EL-KHAMY M,LEE J.Compression of deep convolutional neural networks under joint sparsity constraints[J].arXiv:1805.08303,2018.
[24]WANG M,TANG J,ZHAO H,et al.Automatic Compression of Neural Network with Deep Reinforcement Learning Based on Proximal Gradient Method[J].Mathematics,2023,11(2):338.
[25]ZHAO R,LUK W.Efficient Structured Pruning and Architecture Searching for Group Convolution[C]//IEEE/CVF International Conference on Computer Vision Workshop(ICCVW 2019).IEEE Computer Society,2019:1961-1970.
[26]XU K,WANG Z,GENG X,et al.Efficient joint optimization of layer-adaptive weight pruning in deep neural networks[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision.2023:17447-17457.
[27]ANWAR S,HWANG K,SUNG W.Structured pruning of deep convolutional neural networks[J].ACM Journal on Emerging Technologies in Computing Systems(JETC),2017,13(3):1-18.
[28]LOUATI H,LOUATI A,BECHIKH S,et al.Embedding channel pruning within the CNN architecture design using a bi-level evolutionary approach[J].The Journal of Supercomputing,2023,79(14):16118-16151.
[29]XIA M,ZHONG Z,CHEN D.Structured pruning learns compact and accurate models[J].arXiv:2204.00408,2022.
[30]ECCLES B J,RODGERS P,KILPATRICK P,et al.DNNShifter:An efficient DNN pruning system for edge computing[J].Future Generation Computer Systems,2024,152:43-54.
[31]SUN X,SHI H.Towards Better Structured Pruning Saliency by Reorganizing Convolution[C]//Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision.2024:2204-2214.
[32]QIAN Y,HUANG W,YU Q,et al.Robust Filter Pruning Guided by Deep Frequency-Features for Edge Intelligence[J/OL].https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4691079.
[33]BASHA S H S,FARAZUDDIN M,PULABAIGARI V,et al.Deep model compression based on the training history[J].Neurocomputing,2024,573:127257.
[34]KIM S,HOOPER C,WATTANAWONG T,et al.Full stack optimization of transformer inference:a survey[J].arXiv:2302.14017,2023.
[35]RASTEGARI M,ORDONEZ V,REDMON J,et al.Xnor-net:Imagenet classification using binary convolutional neural networks[C]//European Conference on Computer Vision.Cham:Springer International Publishing,2016:525-542.
[36]VORABBI L,MALTONI D,SANTI S.On-Device Learning with Binary Neural Networks[C]//International Conference on Image Analysis and Processing.Cham:Springer Nature Switzerland,2023:39-50.
[37]LIU B,LI F,WANG X,et al.Ternary weight networks[C]//2023 IEEE International Conference on Acoustics,Speech and Signal Processing(ICASSP 2023).IEEE,2023:1-5.
[38]RAZANI R,MORIN G,SARI E,et al.Adaptive binary-ternary quantization[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.2021:4613-4618.
[39]CHEN W,QIU H,ZHUANG J,et al.Quantization of deep neural networks for accurate edge computing[J].ACM Journal on Emerging Technologies in Computing Systems(JETC),2021,17(4):1-11.
[40]TMAMNA J,AYED E B,FOURATI R,et al.Bare-Bones particle Swarm optimization-based quantization for fast and energy efficient convolutional neural networks[J].Expert Systems,2023(12):1.
[41]SCHAEFER C J S,JOSHI S,LI S,et al.Edge inference with fully differentiable quantized mixed precision neural networks[C]//Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision.2024:8460-8469.
[42]ZHANG H,YAO B,SHAO W,et al.Mixed Precision Quantized Neural Network Accelerator for Remote Sensing Images Classification[C]//2023 IEEE 16th International Conference on Electronic Measurement & Instruments(ICEMI).IEEE,2023:172-176.
[43]LI B,WANG L,WANG Y,et al.Mixed-Precision Network Quantization for Infrared Small Target Segmentation[J].IEEE Transactions on Geoscience and Remote Sensing,2024,62:3346904.
[44]WANG Y Z,GUO B,WANG H L,et al.Adaptive Model Quantization Method for Intelligent Internet of Things Terminal[J].Computer Science,2023,50(11):306-316.
[45]HUANG C,LIU P,FANG L.MXQN:Mixed quantization for reducing bit-width of weights and activations in deep convolutional neural networks[J].Applied Intelligence,2021,51:4561-4574.
[46]KUNDU S,WANG S,SUN Q,et al.Bmpq:bit-gradient sensitivity-driven mixed-precision quantization of dnns from scratch[C]//2022 Design,Automation & Test in Europe Conference & Exhibition(DATE).IEEE,2022:588-591.
[47]LOUIZOS C,REISSER M,BLANKEVOORT T,et al.Relaxed quantization for discretized neural networks[J].arXiv:1810.01875,2018.
[48]LANE N D,BHATTACHARYA S,GEORGIEV P,et al.Deepx:A software accelerator for low-power deep learning inference on mobile devices[C]//2016 15th ACM/IEEE International Conference on Information Processing in Sensor Networks(IPSN).IEEE,2016:1-12.
[49]FANG S,KIRBY R M,ZHE S.Bayesian streaming sparse Tucker decomposition[C]//Uncertainty in Artificial Intelligence.PMLR,2021:558-567.
[50]ERICHSON N B,MANOHAR K,BRUNTON S L,et al.Randomized CP tensor decomposition[J].Machine Learning:Science and Technology,2020,1(2):025012.
[51]SWAMINATHAN S,GARG D,KANNAN R,et al.Sparse low rank factorization for deep neural network compression[J].Neurocomputing,2020,398:185-196.
[52]BAO X,LIANG J,XIA Y,et al.Low-rank decomposition fabric defect detection based on prior and total variation regularization[J].The Visual Computer,2022,38(8):2707-2721.
[53]CHENG T,TONG X,ZHANG Y,et al.Convolutional neural networks with low-rank regularization[J].arXiv:1511.06067,2015.
[54]XIAO J,ZHANG C,GONG Y,et al.HALOC:Hardware-Aware Automatic Low-Rank Compression for Compact Neural Networks[J].arXiv:2301.09422,2023.
[55]IDELBAYEV Y,CARREIRA-PERPIÑÁN M A.Low-rank compression of neural nets:Learning the rank of each layer[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.2020:8049-8059.
[56]MENG X F,LIU F,LI G,et al.Review of Knowledge Distillation in Convolutional Neural Network Compression[J].Journal of Frontiers of Computer Science and Technology,2021,15(10):1812-1829.
[57]GENG L L,NIU B N.Survey of Deep Neural Networks Model Compression[J].Journal of Frontiers of Computer Science and Technology,2020,14(9):1441-1455.
[58]SI Z F,QI H G.Survey on knowledge distillation and its application[J].Journal of Image and Graphics,2023,28(9):2817-2832.
[59]HINTON G,VINYALS O,DEAN J.Distilling the knowledge in a neural network[J].arXiv:1503.02531,2015.
[60]ROMERO A,BALLAS N,KAHOU S E,et al.Fitnets:Hints for thin deep nets[J].arXiv:1412.6550,2014.
[61]ZAGORUYKO S,KOMODAKIS N.Paying more attention to attention:Improving the performance of convolutional neural networks via attention transfer[J].arXiv:1612.03928,2016.
[62]ZHANG H L,CHEN D F,WANG C.Confidence-aware multi-teacher knowledge distillation[C]//IEEE International Conference on Acoustics,Speech and Signal Processing(ICASSP 2022).IEEE,2022.
[63]CHOI J,CHO H,CHEUNG S,et al.ORC:Network group-based knowledge distillation using online role change[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision.2023:17381-17390.
[64]QIAN Y G,MA J,HE N N,et al.Two-stage Adversarial Knowledge Transfer for Edge Intelligence[J].Journal of Software,2022,33(12):4504-4516.
[65]MISHRA R,GUPTA H P.Designing and training of lightweight neural networks on edge devices using early halting in knowledge distillation[J].IEEE Transactions on Mobile Computing,2024(25):4665-4677.
[66]GOU J,HU Y,SUN L,et al.Collaborative knowledge distillation via filter knowledge transfer[J].Expert Systems with Applications,2024,238:121884.
[67]LI T,LI J,LIU Z,et al.Few sample knowledge distillation for efficient network compression[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.2020:14639-14647.
[68]PHAM C,NGUYEN V A,LE T,et al.Frequency Attention for Knowledge Distillation[C]//Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision.2024:2277-2286.
[69]ZHANG Y,XIANG T,HOSPEDALES T M,et al.Deep mutual learning[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.2018:4320-4328.
[70]HUANG Z H,YANG X Y,YU J,et al.Mutual Learning Knowledge Distillation Based on Multi-stage Multi-generative Adversarial Network[J].Computer Science,2022,49(10):169-175.
[71]MIRZADEH S I,FARAJTABAR M,LI A,et al.Improved knowledge distillation via teacher assistant[C]//Proceedings of the AAAI Conference on Artificial Intelligence.2020:5191-5198.
[72]KWON S J,LEE D,KIM B,et al.Structured compression by weight encryption for unstructured pruning and quantization[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.2020:1909-1918.
[73]QU X,WANG J,XIAO J.Quantization and knowledge distillation for efficient federated learning on edge devices[C]//2020 IEEE 22nd International Conference on High Performance Computing and Communications;IEEE 18th International Conference on Smart City;IEEE 6th International Conference on Data Science and Systems(HPCC/SmartCity/DSS).IEEE,2020:967-972.
[74]CHANG W T,KUO C H,FANG L C.Variational Channel Distribution Pruning and Mixed-Precision Quantization for Neural Network Model Compression[C]//2022 International Symposium on VLSI Design,Automation and Test(VLSI-DAT).IEEE,2022:1-3.
[75]BAI S,CHEN J,SHEN X,et al.Unified Data-Free Compression:Pruning and Quantization without Fine-Tuning[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision.2023:5876-5885.
[76]WANG W,ZHOU X,JIANG C,et al.A Lightweight Identification Method for Complex Power Industry Tasks Based on Knowledge Distillation and Network Pruning[J].Processes,2023,11(9):2780.
[77]YU P H,WU S S,KLOPP J P,et al.Joint pruning & quantization for extremely sparse neural networks[J].arXiv:2010.01892,2020.
[78]KIM J,CHANG S,KWAK N.PQK:model compression via pruning,quantization,and knowledge distillation[J].arXiv:2106.14681,2021.
[79]KRIZHEVSKY A,SUTSKEVER I,HINTON G E.ImageNet classification with deep convolutional neural networks[J].Advances in Neural Information Processing Systems,2012(25):1-9.
[80]SIMONYAN K,ZISSERMAN A.Very deep convolutional networks for large-scale image recognition[J].arXiv:1409.1556,2014.
[81]FANG L L,WANG X.Brain tumor segmentation based on the dual-path network of multi-modal MRI images[J].Pattern Recognition,2022(124):108434.
[82]LI D C,LI L,CHEN Z Z,et al.Shift-ConvNets:Small Convolutional Kernel with Large Kernel Effects[J].arXiv:2401.12736,2024.
[83]RONG Y Y,WU X,ZHANG Y M.Classification of motor imagery electroencephalography signals using continuous small convolutional neural network[J].International Journal of Imaging Systems and Technology,2020,30(3):653-659.
[84]IANDOLA F N,HAN S,MOSKEWICZ M W,et al.SqueezeNet:AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size[J].arXiv:1602.07360,2016.
[85]LI X,LONG R J,YAN J,et al.TANet:a tiny plankton classification network for mobile devices[J].Mobile Information Systems,2019(3):1-8.
[86]KATHIRGAMARAJA P,KAMALAKKANNAN K,RATNA-SEGAR N,et al.Edgenet:Squeezenet like convolution neural network on embedded fpga[C]//2018 25th IEEE International Conference on Electronics,Circuits and Systems(ICECS).IEEE,2018:81-84.
[87]MINU S,SUBASHKA R.Optimal Squeeze Net with Deep Neural Network-Based Aerial Image Classification Model in Unmanned Aerial Vehicles[J].Traitement du Signal,2022,39(1):275-281.
[88]HOWARD A G,ZHU M,CHEN B,et al.Mobilenets:Efficient convolutional neural networks for mobile vision applications[J].arXiv:1704.04861,2017.
[89]SANDLER M,HOWARD A,ZHU M,et al.Mobilenetv2:Inverted residuals and linear bottlenecks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.2018:4510-4520.
[90]KOONCE B.MobileNetV3[M]//Convolutional Neural Networks with Swift for Tensorflow:Image Recognition and Dataset Categorization.2021:125-144.
[91]HU J,SHEN L,SUN G.Squeeze-and-excitation networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.2018:7132-7141.
[92]CHOLLET F.Xception:Deep learning with depthwise separable convolutions[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.2017:1251-1258.
[93]GHOLAMI A,KWON K,WU B,et al.Squeezenext:Hardware-aware neural network design[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition workshops.2018:1638-1647.
[94]FREEMAN I,ROESE-KOERNER L,KUMMERT A.Effnet:An efficient structure for convolutional neural networks[C]//2018 25th IEEE International Conference on Image Processing(ICIP).IEEE,2018:6-10.
[95]ZHANG X,ZHOU X,LIN M,et al.Shufflenet:An extremely efficient convolutional neural network for mobile devices[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.2018:6848-6856.
[96]XIE S,GIRSHICK R,DOLLÁR P,et al.Aggregated residual transformations for deep neural networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.2017:1492-1500.
[97]SZEGEDY C,LIU W,JIA Y,et al.Going deeper with convolutions[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.2015:1-9.
[98]HE K,ZHANG X,REN S,et al.Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.2016:770-778.
[99]HE K,ZHANG X,REN S,et al.Identity mappings in deep residual networks[C]//Computer Vision-ECCV 2016:14th European Conference,Amsterdam.Springer International Publishing,2016:630-645.
[100]SZEGEDY C,VANHOUCKE V,IOFFE S,et al.Rethinking the inception architecture for computer vision[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.2016:2818-2826.
[101]MA N,ZHANG X,ZHENG H T,et al.Shufflenet v2:Practical guidelines for efficient cnn architecture design[C]//Proceedings of the European Conference on Computer Vision(ECCV).2018:116-131.
[102]WOO S,PARK J,LEE J Y,et al.Cbam:Convolutional block attention module[C]//Proceedings of the European Conference on Computer Vision(ECCV).2018:3-19.
[103]WANG Q,WU B,ZHU P,et al.ECA-Net:Efficient channel attention for deep convolutional neural networks[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.2020:11534-11542.
[104]ZHANG F,LI D,LI S,et al.A Lightweight Tire Tread Image Classification Network[C]//2022 IEEE International Conference on Visual Communications and Image Processing(VCIP).IEEE,2022:1-5.
[105]BASHA S H S,GOWDA S N,DAKALA J.A simple hybrid filter pruning for efficient edge inference[C]//2022 IEEE International Conference on Acoustics,Speech and Signal Processing(ICASSP 2022).IEEE,2022:3398-3402.