Computer Science ›› 2023, Vol. 50 ›› Issue (2): 23-31. doi: 10.11896/jsjkx.221100133

• Edge Intelligence Collaborative Technologies and Frontier Applications •

Hierarchical Memory Pool Based Edge Semi-supervised Continual Learning Method

WANG Xiangwei, HAN Rui, Chi Harold LIU

  1. School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China
  • Received: 2022-11-15  Revised: 2023-01-16  Online: 2023-02-15  Published: 2023-02-22
  • Corresponding author: HAN Rui (hanrui@bit.edu.cn)
  • About the author: (wangxw-cn@qq.com)
  • Supported by:
    National Natural Science Foundation of China (62132019, 62272046, 61872337)


Abstract: Continuous changes in the external environment degrade, to varying degrees, the performance of neural networks built with traditional deep learning methods, and continual learning has therefore attracted growing attention from researchers. In edge environments, a continual learning model for edge intelligence must not only overcome catastrophic forgetting but also cope with the major challenge of severely constrained resources. This challenge has two aspects: 1) spending substantial manual effort on sample annotation within a short time is impractical, so labeled samples are scarce; 2) deploying large numbers of high-compute devices on edge platforms is difficult, so device resources are very limited. Facing these challenges, existing classic continual learning methods typically require large numbers of labeled samples to maintain model plasticity and stability, and their accuracy drops markedly when annotation resources are scarce; meanwhile, semi-supervised learning methods, which address the shortage of labels, often incur substantial computational overhead to reach higher accuracy. To address these problems, this paper proposes EdgeHML (Edge Hierarchical Memory Learner), a low-overhead, edge-oriented semi-supervised continual learning method that effectively exploits a large number of unlabeled samples alongside a small number of labeled ones. EdgeHML builds a hierarchical data memory pool that stores and replays training samples across multiple storage levels, and implements the interaction between levels with a combined online/offline strategy, helping the model recall old knowledge more effectively while learning new knowledge in a semi-supervised continual learning setting. To further reduce the computational cost of unlabeled samples, EdgeHML introduces a progressive learning scheme on top of the memory pool that shortens the iteration cycles of unlabeled samples by controlling how the model learns from them. Experimental results show that on semi-supervised continual learning tasks built from three datasets of different scales (CIFAR-10, CIFAR-100 and TinyImageNet), EdgeHML improves model accuracy by up to about 16.35% over classic continual learning methods under severely constrained annotation resources, and shortens training iteration time by more than 50% compared with semi-supervised continual learning methods while maintaining model performance, achieving a high-performance, low-overhead semi-supervised continual learning process at the edge.
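The abstract outlines two mechanisms: a multi-level memory pool that stores and replays samples (reservoir sampling is a common choice for keeping such fixed-size replay buffers unbiased over a stream), and a progressive schedule that throttles how often unlabeled samples are revisited. The Python sketch below illustrates one plausible reading of these ideas; it is not the authors' implementation, and every name in it (ReservoirBuffer, HierarchicalMemoryPool, progressive_unlabeled_schedule, the capacities, and the promote-at-task-boundary policy) is a hypothetical reconstruction from the abstract alone.

```python
import random

# Illustrative sketch only: EdgeHML's actual data structures are not
# published in this abstract; all names and policies below are assumptions.


class ReservoirBuffer:
    """Fixed-capacity buffer maintained by reservoir sampling, so every
    sample observed so far has an equal chance of being retained."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []
        self.seen = 0

    def add(self, sample):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(sample)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = sample

    def draw(self, k):
        return random.sample(self.items, min(k, len(self.items)))


class HierarchicalMemoryPool:
    """Hypothetical two-level memory: a small online level updated on
    every step, and a larger offline level refreshed between tasks."""

    def __init__(self, online_cap=256, offline_cap=2048):
        self.online = ReservoirBuffer(online_cap)
        self.offline = ReservoirBuffer(offline_cap)

    def observe(self, sample):
        # The online level sees every incoming sample as it streams in.
        self.online.add(sample)

    def consolidate(self):
        # Offline interaction: promote online contents into the offline
        # level (e.g., at a task boundary), then restart the online level.
        for s in self.online.items:
            self.offline.add(s)
        self.online = ReservoirBuffer(self.online.capacity)

    def replay_batch(self, k):
        # Mix recent (online) and long-term (offline) samples for replay.
        half = k // 2
        return self.online.draw(half) + self.offline.draw(k - half)


def progressive_unlabeled_schedule(step, warmup=1000, period=4):
    """Hypothetical gate for progressive learning: early on, unlabeled
    samples are processed only every `period` steps, cutting their
    iteration cost; after `warmup` steps they are used on every step."""
    return step >= warmup or step % period == 0


if __name__ == "__main__":
    pool = HierarchicalMemoryPool(online_cap=4, offline_cap=8)
    for i in range(20):
        pool.observe(("image", i))   # stand-in for a (sample, label) pair
        if i % 10 == 9:
            pool.consolidate()       # e.g., at a task boundary
    print(pool.replay_batch(4))
    print([progressive_unlabeled_schedule(s, warmup=8, period=4)
           for s in range(10)])
```

In a training loop built on this sketch, each step would add the incoming labeled batch to the pool, mix in replay_batch(...) when computing the supervised loss, and compute the consistency loss on unlabeled data only when the schedule returns True; under these assumptions, that gating is where the claimed reduction in unlabeled iteration cycles would come from.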

Key words: Edge intelligence, Continual learning, Semi-supervised learning, Data labeling, Deep neural network



CLC Number:

  • TP301