Computer Science, 2025, Vol. 52, Issue (3): 326-337. DOI: 10.11896/jsjkx.240900070
WANG Dongzhi1, LIU Yan1, GUO Bin1, YU Zhiwen1,2
Abstract: Mobile edge computing, with its low communication cost and fast service response, has become an important computing paradigm for intelligent IoT applications. In real-world scenarios, however, the data available to any single device is usually limited, and the edge computing environment is typically dynamic. To address these problems, this paper studies edge federated continual learning and introduces spiking neural networks (SNNs) into the edge federated continual learning framework, reducing the computation and communication resource consumption of devices while mitigating the catastrophic forgetting that local devices face in dynamic edge environments. Applying SNNs to edge federated continual learning raises two main challenges. First, conventional SNNs do not account for continually arriving input data and struggle to store and update knowledge over long time spans, preventing effective continual learning. Second, the SNN models learned on different devices differ, so a global model obtained by conventional federated aggregation cannot perform well on every edge device. This paper therefore proposes a new SNN-Enhanced Edge Federated Continual Learning (SNN-Enhanced Edge-FCL) method. For the first challenge, a brain-inspired continual learning algorithm for edge devices is proposed: each device trains a brain-inspired SNN locally and uses a herding-based sample selection strategy to retain representative samples of past tasks. For the second challenge, a multi-device collaborative global adaptive aggregation algorithm is proposed: a spike data quality metric is designed based on how SNNs operate, and a data-driven dynamic weighted aggregation method assigns each device model a corresponding weight during global aggregation, improving the generalization of the global model. Experimental results show that, compared with edge federated continual learning methods based on conventional neural networks, SNN-Enhanced Edge-FCL reduces communication and computation resource consumption on edge devices by 92%, and the edge devices achieve over 87% accuracy on all five consecutive tasks on the test set.
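The local training described above relies on spiking neurons, which communicate via discrete spikes rather than continuous activations; this is the source of the computation and communication savings. The following is a minimal sketch of the standard leaky integrate-and-fire (LIF) neuron dynamics commonly used in SNNs; the function name, the discretization, and the parameter values are illustrative assumptions, not the paper's implementation.

```python
def lif_simulate(inputs, tau=2.0, v_threshold=1.0, v_reset=0.0):
    """Simulate one leaky integrate-and-fire (LIF) neuron over time.

    Discretized membrane update: v <- v + (x - v) / tau.
    A spike (1) is emitted and the potential hard-reset whenever
    v crosses v_threshold; otherwise the output is 0.
    All parameters here are illustrative, not taken from the paper.
    """
    v = v_reset
    spikes = []
    for x in inputs:
        v = v + (x - v) / tau          # leaky integration of the input current
        if v >= v_threshold:
            spikes.append(1)           # fire a binary spike
            v = v_reset                # hard reset after firing
        else:
            spikes.append(0)
    return spikes

# A constant supra-threshold input makes the neuron fire periodically.
spikes = lif_simulate([1.5, 1.5, 1.5, 1.5])  # → [0, 1, 0, 1]
```

Because the outputs are binary spike trains rather than floating-point activations, transmitting or accumulating them is far cheaper than dense tensor traffic, which is the intuition behind the reported resource savings.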
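The herding-based sample selection mentioned for the first challenge can be sketched as the greedy herding procedure known from rehearsal-based continual learning (e.g. iCaRL): repeatedly pick the sample whose inclusion keeps the running mean of the selected set closest to the class mean. This is a hedged illustration of the general technique; the paper's actual selection criterion over spike data may differ.

```python
import numpy as np

def herding_select(features, m):
    """Greedily pick m exemplar indices whose running mean best
    approximates the overall feature mean (herding, iCaRL-style).

    features: (n_samples, feature_dim) array; m: number of exemplars.
    """
    mu = features.mean(axis=0)                 # target: the class mean
    selected, total = [], np.zeros_like(mu)
    for k in range(1, m + 1):
        # Candidate running means if each sample were added next.
        dists = np.linalg.norm(mu - (total + features) / k, axis=1)
        dists[selected] = np.inf               # never reselect a sample
        i = int(np.argmin(dists))
        selected.append(i)
        total += features[i]
    return selected

# Outlier [10, 10] is skipped: the picks that best track the mean win.
idx = herding_select(np.array([[0., 0.], [2., 0.], [1., 0.], [10., 10.]]), 2)
```

Storing only such mean-preserving exemplars lets a memory-constrained edge device replay a small, representative subset of past tasks instead of the full history.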
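The data-driven dynamic weighted aggregation for the second challenge amounts to replacing FedAvg's sample-count weights with per-client quality weights. The sketch below assumes each client reports a scalar "spike data quality" score (how the paper computes that metric is not reproduced here); weights are the normalized scores and the global parameters are their convex combination.

```python
import numpy as np

def quality_weighted_aggregate(client_params, quality_scores):
    """Aggregate per-client parameter vectors into a global model.

    client_params: list of 1-D parameter arrays, one per client.
    quality_scores: per-client scalar scores (hypothetical spike data
    quality metric); higher score means more influence on the global model.
    """
    q = np.asarray(quality_scores, dtype=float)
    w = q / q.sum()                    # normalize into a convex combination
    stacked = np.stack(client_params)  # shape: (n_clients, n_params)
    return w @ stacked                 # quality-weighted average

# A client with score 3 contributes 3x the weight of a client with score 1.
global_params = quality_weighted_aggregate(
    [np.array([1., 1.]), np.array([3., 3.])], [1., 3.])  # → [2.5, 2.5]
```

With uniform scores this reduces exactly to plain parameter averaging, so the scheme degrades gracefully when the quality metric is uninformative.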