Computer Science, 2023, Vol. 50, Issue (4): 333-342. doi: 10.11896/jsjkx.220300033

• Information Security •

Multi-objective Federated Learning Evolutionary Algorithm Based on Improved NSGA-III

ZHONG Jialin, WU Yahui, DENG Su, ZHOU Haohao, MA Wubin

  1. Science and Technology on Information System Engineering Laboratory, National University of Defense Technology, Changsha 410073, China
  • Received: 2022-03-04  Revised: 2022-04-22  Online: 2023-04-15  Published: 2023-04-06
  • Corresponding author: MA Wubin (wb_ma@nudt.edu.cn)

  • About author: ZHONG Jialin (zhongjialin20@nudt.edu.cn), born in 1998, postgraduate. Her main research interests include federated learning.
    MA Wubin, born in 1986, Ph.D., associate researcher. His main research interests include multi-objective optimization, microservices and data mining.
  • Supported by:
    General Program of National Natural Science Foundation of China(61871388).


Abstract: Federated learning addresses the problems of data islands and privacy leakage to a certain extent, but it suffers from high communication cost, unstable communication, and uneven distribution of performance across participants. To overcome these shortcomings and strike a balance among model effectiveness, fairness, and communication cost, an improved NSGA-III algorithm for multi-objective optimization of federated learning is proposed. First, a multi-objective optimization model of federated learning is constructed, whose objectives are to maximize the accuracy of the global model, minimize the variance of the global model's accuracy distribution across participants, and minimize the participants' communication cost. Then, an improved NSGA-III algorithm based on fast greedy initialization is proposed, which raises the efficiency of NSGA-III on this multi-objective optimization problem. Experimental results show that the proposed method obtains better Pareto solutions than classical multi-objective evolutionary algorithms, and that, compared with the standard federated learning model, the optimized model effectively reduces the communication cost and the variance of the accuracy distribution while preserving the accuracy of the global model.
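The abstract above is the only technical description in this record and does not give the formal objective definitions. Purely as an illustrative sketch (all names such as evaluate_candidate, per_client_accuracy and bytes_per_round are hypothetical and not taken from the paper), the following Python/NumPy snippet shows how the three stated objectives (maximizing global-model accuracy, minimizing the variance of the accuracy distribution, and minimizing communication cost) could be evaluated for one candidate federated configuration inside the fitness step of an NSGA-III-style search.

```python
import numpy as np

def evaluate_candidate(per_client_accuracy, participation_mask, bytes_per_round):
    """Hypothetical three-objective evaluation of one candidate configuration.

    per_client_accuracy : test accuracy of the global model on each client
    participation_mask  : 0/1 vector marking which clients communicate this round
    bytes_per_round     : payload uploaded by one participating client per round
    All objectives are returned in minimization form, as NSGA-III expects.
    """
    per_client_accuracy = np.asarray(per_client_accuracy, dtype=float)
    participation_mask = np.asarray(participation_mask)

    f1 = -per_client_accuracy.mean()                 # maximize accuracy -> minimize its negative
    f2 = per_client_accuracy.var()                   # fairness: variance of the accuracy distribution
    f3 = participation_mask.sum() * bytes_per_round  # communication cost of the selected clients
    return np.array([f1, f2, f3])

# Toy usage: 5 clients, 3 of them selected for this round
acc = [0.91, 0.88, 0.84, 0.90, 0.79]
mask = [1, 0, 1, 1, 0]
print(evaluate_candidate(acc, mask, bytes_per_round=1.2e6))
```

In the paper's actual pipeline these values would come from training and evaluating the federated model under each candidate's settings, with the improved NSGA-III (fast greedy initialization) searching over such candidates; the sketch only fixes the shape of the objective vector.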

Key words: Federated learning, Multi-objective equilibrium, Non-dominated sorting genetic algorithm-III (NSGA-III), Multi-objective optimization, Parameter optimization

CLC Number: TP301
[1]REGULATION G D P.Regulation EU 2016/679 of the European Parliament and of the Council of 27 April 2016[J].Official Journal of the European Union,2016(59):1-88.
[2]SUN Y H.Cybersecurity Law:The Fundamental Measures to Ensure Cybersecurity——Study and implement the “Network Security Law of the People’s Republic of China”[J].China Information Security,2016(12):30-33.
[3]MCMAHAN B,MOORE E,RAMAGE D,et al.Communication-efficient learning of deep networks from decentralized data[C]//Artificial Intelligence and Statistics.PMLR,2017:1273-1282.
[4]LI L,FAN Y,TSE M,et al.A review of applications in federated learning[J].Computers & Industrial Engineering,2020,149(5):106854.
[5]CHEN Y,SUN X,JIN Y.Communication-efficient federated deep learning with layerwise asynchronous model update and temporally weighted aggregation[J].IEEE Transactions on Neural Networks and Learning Systems,2019,31(10):4229-4238.
[6]ZHU H,JIN Y.Multi-objective evolutionary federated learning[J].IEEE Transactions on Neural Networks and Learning Systems,2019,31(4):1310-1322.
[7]MOCANU D C,MOCANU E,STONE P,et al.Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science[J].Nature Communications,2018,9(1):1-12.
[8]HAO M,LI H,LUO X,et al.Efficient and privacy-enhanced federated learning for industrial artificial intelligence[J].IEEE Transactions on Industrial Informatics,2019,16(10):6532-6542.
[9]KANG J,XIONG Z,NIYATO D,et al.Incentive design for efficient federated learning in mobile networks:A contract theory approach[C]//2019 IEEE VTS Asia Pacific Wireless Communications Symposium.IEEE,2019:1-5.
[10]LI T,SANJABI M,BEIRAMI A,et al.Fair resource allocation in federated learning[J].arXiv:1905.10497,2019.
[11]QOLOMANY B,AHMAD K,AL-FUQAHA A,et al.Particle swarm optimized federated learning for industrial IoT and smart city services[C]//2020 IEEE Global Communications Conference(GLOBECOM 2020).IEEE,2020:1-6.
[12]DEB K,PRATAP A,AGARWAL S,et al.A fast and elitist multi-objective genetic algorithm:NSGA-II[J].IEEE Transactions on Evolutionary Computation,2002,6(2):182-197.
[13]ZHANG Q,LI H.MOEA/D:A multi-objective evolutionary algorithm based on decomposition[J].IEEE Transactions on Evolutionary Computation,2007,11(6):712-731.
[14]ZITZLER E,LAUMANNS M,THIELE L.SPEA2:Improving the strength Pareto evolutionary algorithm:TIK-Report 103[R].Swiss:Swiss Federal Institute of Technology,2001.
[15]KNOWLES J,CORNE D.The Pareto archived evolution strategy:A new baseline algorithm for Pareto multi-objective optimization[C]//Proceedings of the 1999 Congress on Evolutionary Computation-CEC99.IEEE,1999:98-105.
[16]DEB K,JAIN H.An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach,part I:solving problems with box constraints[J].IEEE Transactions on Evolutionary Computation,2013,18(4):577-601.
[17]ERDOS P,RÉNYI A.On the evolution of random graphs[J].Publications of the Mathematical Institute of the Hungarian Academy of Sciences,1960,5(1):17-60.
[18]LECUN Y,CORTES C,BURGES C J.MNIST handwritten digit database[J/OL].http://yann.lecun.com/exdb/mnist.
[19]KRIZHEVSKY A,HINTON G.Learning multiple layers of features from tiny images[J/OL].http://www.cs.toronto.edu/~kriz/cifar.html.
[20]ZITZLER E,THIELE L.Multi-objective evolutionary algorithms:a comparative case study and the strength Pareto approach[J].IEEE Transactions on Evolutionary Computation,1999,3(4):257-271.
[21]ZITZLER E,THIELE L.Multi-objective optimization using evolutionary algorithms——a comparative case study[C]//International Conference on Parallel Problem Solving From Nature.Berlin:Springer,1998:292-301.
[22]LI T,SAHU A K,ZAHEER M,et al.Federated optimization in heterogeneous networks[C]//Proceedings of Machine Learning and Systems.2020:429-450.