Computer Science ›› 2025, Vol. 52 ›› Issue (3): 326-337. doi: 10.11896/jsjkx.240900070

• Computer Network •



  • Corresponding author: GUO Bin(guob@nwpu.edu.cn)
  • Author's e-mail: dzhiwang@mail.nwpu.edu.cn

Edge-side Federated Continuous Learning Method Based on Brain-like Spiking Neural Networks

WANG Dongzhi1, LIU Yan1, GUO Bin1, YU Zhiwen1,2   

  1 College of Computer Science,Northwestern Polytechnical University,Xi'an 710072,China
    2 Harbin Engineering University,Harbin 150001,China
  • Received:2024-09-11 Revised:2024-11-02 Online:2025-03-15 Published:2025-03-07
  • About author:WANG Dongzhi,born in 2002,postgraduate.Her main research interests include ubiquitous computing and mobile crowd sensing.
    GUO Bin,born in 1980,Ph.D,Ph.D supervisor,is a member of CCF(No.E200019107S).His main research interests include ubiquitous computing and mobile crowd sensing.
  • Supported by:
    National Science Fund for Distinguished Young Scholars of China(62025205) and National Natural Science Foundation of China(62032020,62302017).


Abstract: Mobile edge computing has become an important computing model adapted to the needs of smart IoT applications,with advantages such as low communication cost and fast service response.In practical application scenarios,on the one hand,the data acquired by a single device is usually limited;on the other hand,the edge computing environment is usually dynamic and variable.To address these problems,this paper focuses on edge federated continuous learning and innovatively introduces spiking neural networks(SNNs) into the edge federated continuous learning framework,solving the catastrophic forgetting problem faced by local devices in dynamic edge environments while reducing the consumption of device computation and communication resources.Using SNNs to solve the edge federated continuous learning problem faces two main challenges.First,traditional spiking neural networks do not take into account continuously increasing input data and have difficulty storing and updating knowledge over a long time span,which prevents effective continuous learning.Second,the SNN models learned by different devices differ,and the global model obtained by traditional federated aggregation fails to achieve good performance on every edge device.Therefore,a new spiking neural network-enhanced edge federated continuous learning(SNN-Enhanced Edge-FCL) method is proposed.To address the first challenge,a brain-like continuous learning algorithm for edge devices is proposed,which employs a brain-like spiking neural network for local training on a single device and adopts a sample selection strategy based on the flocking effect to save representative samples of historical tasks.To address the second challenge,a global adaptive aggregation algorithm with multi-device collaboration is proposed:based on the working principle of SNNs,a spiking data quality index is designed,and a data-driven dynamic weighted aggregation method assigns corresponding weights to the different device models during global aggregation to enhance the generalization of the global model.Experimental results show that,compared with edge federated continuous learning methods based on traditional neural networks,the proposed method reduces the communication and computational resources consumed on edge devices by 92%,and the accuracy of the edge devices on the test set remains above 87% across five continuous tasks.
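The data-driven dynamic weighted aggregation step described above can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the `aggregate` function, the flat-list parameter format, and the scalar per-client `quality` scores stand in for the paper's spiking data quality index, whose exact form is not given here. Each client's weight is its quality score normalized over all clients, replacing the sample-count weights of plain FedAvg.

```python
from typing import Dict, List

def aggregate(models: List[Dict[str, List[float]]],
              quality: List[float]) -> Dict[str, List[float]]:
    """Quality-weighted federated averaging of client parameter dicts.

    Each client model maps a parameter name to a flat list of values;
    quality[k] is client k's (hypothetical) spike-data quality score.
    """
    total = sum(quality)
    weights = [q / total for q in quality]  # normalize scores to sum to 1
    return {
        name: [
            sum(w * m[name][i] for w, m in zip(weights, models))
            for i in range(len(models[0][name]))
        ]
        for name in models[0]
    }
```

With equal quality scores this reduces to unweighted FedAvg; skewing the scores toward clients with higher-quality spike data pulls the global model toward their parameters, which is the intended generalization benefit.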

Key words: Mobile edge computing, Resource constrained, Catastrophic forgetting, Federated learning, Continual learning, Brain-like spiking neural networks
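For readers unfamiliar with the last keyword: a spiking neural network transmits information as discrete spikes rather than continuous activations, which is what makes it cheap to compute and communicate. A minimal leaky integrate-and-fire (LIF) update, the neuron model most commonly used in deep SNN work, can be sketched as below; the paper does not specify its neuron model or constants here, so `tau` and `v_th` are illustrative.

```python
def lif_step(v: float, i_in: float, tau: float = 2.0, v_th: float = 1.0):
    """One discrete-time step of a leaky integrate-and-fire neuron.

    The membrane potential v leaks toward the input current i_in with
    time constant tau and, on crossing the threshold v_th, emits a
    spike and hard-resets to zero.
    Returns (new_potential, spike), where spike is 0 or 1.
    """
    v = v + (i_in - v) / tau  # leaky integration toward the input
    if v >= v_th:
        return 0.0, 1         # spike and hard reset
    return v, 0               # sub-threshold: no spike
```

Iterating this update turns an input signal into a sparse binary spike train; stronger inputs cross the threshold more often and therefore spike at a higher rate.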

CLC Number: TP183