Computer Science ›› 2025, Vol. 52 ›› Issue (12): 314-320. doi: 10.11896/jsjkx.241100085

• Information Security •

Delay-Sensitive Decentralized Federated Learning Algorithm

PENG Jiao1, CHANG Yongjuan1, YAN Tao2, YOU Zhangzheng2, SONG Meina2, ZHU Yifan2, ZHANG Pengfei1, HE Yue1, ZHANG Bo1, OU Zhonghong3   

  1 State Grid Hebei Information and Telecommunication Branch, Shijiazhuang 050000, China
    2 School of Computer Science (National Pilot Software Engineering School), Beijing University of Posts and Telecommunications, Beijing 100876, China
    3 State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China
  • Received: 2024-11-13  Revised: 2025-02-21  Online: 2025-12-15  Published: 2025-12-09
  • About author: PENG Jiao, born in 1991, master, engineer. Her main research interests include NLP, image processing, and big data analysis.
    OU Zhonghong, born in 1982, Ph.D., professor, Ph.D. supervisor, is a senior member of CCF (No. 69730S). His main research interests include few-shot learning, cross-domain adaptation, and small object detection.
  • Supported by:
    This work was supported by the State Grid Hebei Information and Telecommunication (SGHEXT00SJJS2310134).

Abstract: In recent years, the rapid development of deep learning, mobile devices, and IoT technology has led to a surge in demand for model inference and data storage on edge devices. Traditional centralized model training is limited by data volume, communication bandwidth, and user data privacy concerns, and cannot effectively address these new challenges; federated learning emerged in response. Federated learning allows edge devices to train models on local data and upload only model parameters to a central server for aggregation and redistribution, ensuring that joint modeling can be performed without data leaving the trusted domain of any party. Decentralized federated learning was further developed to overcome the latency, bandwidth limitations, and single-point-of-failure risks of the central server. However, the training efficiency of federated learning is severely affected by real-world network delay and bandwidth, making multi-party joint modeling difficult. To address this issue, this paper proposes DBFedAvg, a decentralized federated learning algorithm that dynamically selects nodes with lower average delay as main nodes, thereby reducing communication costs, improving global model training performance, and accelerating model convergence. Experimental results on the Sprint network topology and other scenarios validate that the proposed method yields significant improvements in communication cost and model convergence.
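To make the abstract's two core ideas concrete, the following is a minimal Python sketch of FedAvg-style weighted parameter averaging combined with delay-based main-node selection. All names here (select_main_node, federated_average, the pairwise delay matrix) are illustrative assumptions, not the authors' DBFedAvg implementation, which may use a different delay statistic, topology, or selection schedule.

```python
import numpy as np

def select_main_node(delay_matrix: np.ndarray) -> int:
    """Pick the node whose mean delay to its peers is lowest.

    delay_matrix[i][j] is the measured link delay (ms) from node i
    to node j; the diagonal (self-delay) is excluded. Hypothetical
    stand-in for DBFedAvg's main-node selection criterion.
    """
    n = delay_matrix.shape[0]
    mean_delays = (delay_matrix.sum(axis=1) - np.diag(delay_matrix)) / (n - 1)
    return int(np.argmin(mean_delays))

def federated_average(params, sample_counts):
    """FedAvg aggregation: average client parameters weighted by
    each client's local sample count."""
    total = sum(sample_counts)
    return sum((c / total) * p for c, p in zip(sample_counts, params))

# Toy round with 4 nodes and symmetric pairwise delays in ms.
delays = np.array([[ 0., 20., 35., 50.],
                   [20.,  0., 15., 40.],
                   [35., 15.,  0., 30.],
                   [50., 40., 30.,  0.]])
main = select_main_node(delays)                   # lowest-mean-delay node
models = [np.random.rand(10) for _ in range(4)]   # stand-in model parameters
counts = [100, 200, 150, 50]                      # local dataset sizes
global_model = federated_average(models, counts)  # aggregated at `main`
print(f"main node: {main}, global model shape: {global_model.shape}")
```

In this toy topology, node 1 has the lowest mean peer delay (25 ms) and would serve as the round's aggregation point; re-evaluating the selection each round lets the role migrate as measured delays change, which is the intuition behind the dynamic selection the abstract describes.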

Key words: Federated learning, Decentralized, Real network environment, Delay-based, Communication cost

CLC Number: TP391
[1] MCMAHAN B, MOORE E, RAMAGE D, et al. Communication-efficient learning of deep networks from decentralized data[C]//Artificial Intelligence and Statistics. PMLR, 2017: 1273-1282.
[2] VANHAESEBROUCK P, BELLET A, TOMMASI M. Decentralized collaborative learning of personalized models over networks[C]//Artificial Intelligence and Statistics. PMLR, 2017: 509-517.
[3] WARNAT-HERRESTHAL S, SCHULTZE H, SHASTRY K L, et al. Swarm learning for decentralized and confidential clinical machine learning[J]. Nature, 2021, 594(7862): 265-270.
[4] LI T, SAHU A K, ZAHEER M, et al. Federated optimization in heterogeneous networks[C]//Proceedings of Machine Learning and Systems. 2020: 429-450.
[5] KARIMIREDDY S P, KALE S, MOHRI M, et al. SCAFFOLD: Stochastic controlled averaging for federated learning[C]//International Conference on Machine Learning. PMLR, 2020: 5132-5143.
[6] JIANG P, AGRAWAL G. A linear speedup analysis of distributed deep learning with sparse and quantized communication[C]//Proceedings of the 32nd International Conference on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc., 2018: 2530-2541.
[7] YU H, YANG S, ZHU S. Parallel restarted SGD with faster convergence and less communication: Demystifying why model averaging works for deep learning[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2019: 5693-5700.
[8] BASU D, DATA D, KARAKUS C, et al. Qsparse-local-SGD: Distributed SGD with quantization, sparsification and local computations[J]. IEEE Journal on Selected Areas in Information Theory, 2020, 1(1): 217-226.
[9] TANG H, YU C, LIAN X, et al. DoubleSqueeze: Parallel stochastic gradient descent with double-pass error-compensated compression[C]//International Conference on Machine Learning. PMLR, 2019: 6155-6165.
[10] KUO T T, OHNO-MACHADO L. ModelChain: Decentralized privacy-preserving healthcare predictive modeling framework on private blockchain networks[J]. arXiv:1802.01746, 2018.
[11] WENG J, WENG J, ZHANG J, et al. DeepChain: Auditable and privacy-preserving deep learning with blockchain-based incentive[J]. IEEE Transactions on Dependable and Secure Computing, 2019, 18(5): 2438-2455.
[12] TRAN A T, LUONG T D, KARNJANA J, et al. An efficient approach for privacy preserving decentralized deep learning models based on secure multi-party computation[J]. Neurocomputing, 2021, 422: 245-262.
[13] ROY A G, SIDDIQUI S, PÖLSTERL S, et al. BrainTorrent: A peer-to-peer environment for decentralized federated learning[J]. arXiv:1905.06731, 2019.
[14] WANG J, SAHU A K, YANG Z, et al. MATCHA: Speeding up decentralized SGD via matching decomposition sampling[C]//2019 Sixth Indian Control Conference (ICC). IEEE, 2019: 299-300.
[15] LALITHA A, KILINC O C, JAVIDI T, et al. Peer-to-peer federated learning on graphs[J]. arXiv:1901.11173, 2019.
[16] HE C, CEYANI E, BALASUBRAMANIAN K, et al. SpreadGNN: Serverless multi-task federated learning for graph neural networks[J]. arXiv:2106.02743, 2021.
[17] HEGEDŰS I, DANNER G, JELASITY M. Gossip learning as a decentralized alternative to federated learning[C]//IFIP International Conference on Distributed Applications and Interoperable Systems. Cham: Springer, 2019: 74-90.
[18] TIAN C, LI L, TAM K, et al. Breaking the memory wall for heterogeneous federated learning via model splitting[J]. IEEE Transactions on Parallel and Distributed Systems, 2024, 35(12): 2513-2526.
[19] ZHANG Y, BEHNIA R, YAVUZ A A, et al. Uncovering attacks and defenses in secure aggregation for federated deep learning[J]. arXiv:2410.09676, 2024.
[20] TILMANS O. IPMininet's documentation[EB/OL]. (2022-05-28) [2024-03-01]. https://ipmininet.readthedocs.io/en/latest/.
[21] DE OLIVEIRA R L S, SCHWEITZER C M, SHINODA A A, et al. Using Mininet for emulation and prototyping software-defined networks[C]//2014 IEEE Colombian Conference on Communications and Computing (COLCOM). IEEE, 2014: 1-6.
[22] MILLS J, HU J, MIN G. Communication-efficient federated learning for wireless edge intelligence in IoT[J]. IEEE Internet of Things Journal, 2019, 7(7): 5986-5994.
[23] KRIZHEVSKY A, HINTON G. Learning multiple layers of features from tiny images[EB/OL]. http://www.cs.utoronto.ca/~kriz/learning-features-2009-TR.pdf.