Computer Science ›› 2024, Vol. 51 ›› Issue (11): 356-367. doi: 10.11896/jsjkx.231000158

• Information Security •

PRFL:Privacy-preserving Robust Aggregation Method for Federated Learning

GAO Qi1, SUN Yi1, GAI Xinmao3, WANG Youhe1, YANG Fan1,2   

    1 School of Cryptography Engineering,Information Engineering University,Zhengzhou 450001,China
    2 Unit 61623,Beijing 100036,China
    3 Unit 93216,Beijing 100085,China
  • Received:2023-10-23 Revised:2024-03-29 Online:2024-11-15 Published:2024-11-06
  • About author:GAO Qi,born in 1998,postgraduate.His main research interests include federated learning and privacy protection.
    SUN Yi,born in 1979,Ph.D,associate professor,Ph.D supervisor.Her main research interests include network and information security,and data security exchange.

Abstract: Federated learning allows users to train a model collaboratively by exchanging model parameters, which reduces the risk of raw data leakage. However, studies have found that user privacy can still be inferred from model parameters, and many privacy-preserving model aggregation methods have been proposed in response. Moreover, malicious users can corrupt federated learning by submitting carefully constructed poisoning models, and when models are aggregated under privacy protection, such users can mount even stealthier poisoning attacks. To provide privacy protection while resisting poisoning attacks, a privacy-preserving robust aggregation method for federated learning, named PRFL, is proposed. PRFL not only effectively defends against poisoning attacks launched by Byzantine users, but also guarantees the privacy of local models as well as the accuracy and efficiency of the global model. Specifically, a lightweight privacy-preserving model aggregation method under a dual-server architecture is first proposed, which achieves privacy-preserving aggregation while preserving the accuracy of the global model and avoiding significant overhead. Then, a secret model distance computation method is proposed, which allows the two servers to compute model distances without exposing local model parameters, and a poisoning model detection method is designed based on this computation and the local outlier factor (LOF) algorithm. Finally, the security of PRFL is analyzed. Experimental results on two real image datasets show that PRFL achieves model accuracy similar to FedAvg in the absence of attacks, effectively defends against three advanced poisoning attacks, and outperforms the existing Krum, Median, and Trimmed-mean methods in both the independent and identically distributed (IID) and non-IID data settings.
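To make the two building blocks described above more concrete, the Python sketch below gives a minimal, purely illustrative rendering of (1) a dual-server aggregation in which each client's update is split into two additive shares so that neither server sees a plaintext local model, and (2) LOF-based poisoning detection over pairwise model distances. This is not the authors' PRFL protocol: the paper's lightweight aggregation scheme and its secret distance computation are abstracted away (the distance matrix here is built in the clear, and all function names such as split_into_shares, detect_poisoned, and aggregate are hypothetical).

# A minimal sketch, assuming additive sharing across two non-colluding servers
# and LOF over a precomputed distance matrix; not the paper's actual protocol.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def split_into_shares(update: np.ndarray, rng: np.random.Generator):
    """Split a local model update into two additive shares, one per server."""
    share_a = rng.normal(size=update.shape)   # random mask held by server A
    share_b = update - share_a                # complementary share held by server B
    return share_a, share_b

def detect_poisoned(distance_matrix: np.ndarray, k: int = 5):
    """Flag poisoned models with LOF on a pairwise model-distance matrix.
    (In PRFL the distances are obtained via the secret computation between
    the two servers; a plaintext matrix is used here only for illustration.)"""
    lof = LocalOutlierFactor(n_neighbors=k, metric="precomputed")
    labels = lof.fit_predict(distance_matrix)  # -1 = outlier, 1 = inlier
    return [i for i, label in enumerate(labels) if label == 1]

def aggregate(shares_a, shares_b, benign_idx):
    """Each server sums the shares of the models kept as benign; adding the two
    partial sums reconstructs only the aggregate, never an individual update."""
    sum_a = sum(shares_a[i] for i in benign_idx)
    sum_b = sum(shares_b[i] for i in benign_idx)
    return (sum_a + sum_b) / len(benign_idx)

# Toy usage: 10 honest updates near zero, 2 heavily shifted "poisoned" ones.
rng = np.random.default_rng(0)
updates = [rng.normal(0, 0.1, 100) for _ in range(10)] + \
          [rng.normal(5, 0.1, 100) for _ in range(2)]
shares_a, shares_b = zip(*(split_into_shares(u, rng) for u in updates))

# Pairwise Euclidean distances between local models.
stacked = np.array(updates)
dist = np.linalg.norm(stacked[:, None, :] - stacked[None, :, :], axis=-1)

benign = detect_poisoned(dist, k=5)            # poisoned indices should be excluded
global_update = aggregate(shares_a, shares_b, benign)

The sketch keeps the roles separate on purpose: detection only ever touches distances, and aggregation only ever touches shares, mirroring the abstract's claim that local parameters are never exposed to either server.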

Key words: Federated learning, Privacy protection, Poisoning attack, Robust aggregation, Outlier

CLC Number: TP391
[1] MCMAHAN B,MOORE E,RAMAGE D,et al.Communication-Efficient Learning of Deep Networks from Decentralized Data[C]//Proceedings of the 2017 International Conference on Artificial Intelligence and Statistics.Brookline:Microtome Publishing,2017:1273-1282.
[2] VOIGT P,BUSSCHE A V D.The EU General Data Protection Regulation(GDPR)[M].Berlin:Springer,2017:1-383.
[3] KAIROUZ P,MCMAHAN H B,AVENT B,et al.Advances and Open Problems in Federated Learning[J].Foundations and Trends in Machine Learning,2021,14(1/2):1-210.
[4] MOTHUKURI V,PARIZI R M,POURIYEH S,et al.A Survey on Security and Privacy of Federated Learning[J].Future Generation Computer Systems-The International Journal of eScience,2021,115:619-640.
[5] SHOKRI R,STRONATI M,SONG C Z,et al.Membership Inference Attacks against Machine Learning Models[C]//Proceedings of the 2017 IEEE Symposium on Security and Privacy.New York:IEEE Press,2017:3-18.
[6] ZHU L G,LIU Z J,HAN S.Deep Leakage from Gradients[C]//Proceedings of the 2019 International Conference on Neural Information Processing Systems.Los Angeles:NIPS,2019:1323-1334.
[7] SALEM A,ZHANG Y,HUMBERT M,et al.ML-Leaks:Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models[C]//Proceedings of the 2019 Network and Distributed System Security Symposium.Reston:Internet Society,2019:1-15.
[8] GEIPING J,BAUERMEISTER H,DRÖGE H,et al.Inverting Gradients-How Easy Is It to Break Privacy in Federated Learning?[C]//Proceedings of the 2020 International Conference on Neural Information Processing Systems.Los Angeles:NIPS,2020:16937-16947.
[9] MANSOURI M,ÖNEN M,JABALLAH W B,et al.SoK:Secure Aggregation Based on Cryptographic Schemes for Federated Learning[J].Proceedings on Privacy Enhancing Technologies,2023,2023(1):140-157.
[10] WEI K,LI J,DING M,et al.Federated Learning with Differential Privacy:Algorithms and Performance Analysis[J].IEEE Transactions on Information Forensics and Security,2020,15:3454-3469.
[11] ZHOU H,YANG G,HUANG Y,et al.Privacy-Preserving and Verifiable Federated Learning Framework for Edge Computing[J].IEEE Transactions on Information Forensics and Security,2023,18:565-580.
[12] STEVENS T,SKALKA C,VINCENT C,et al.Efficient Differentially Private Secure Aggregation for Federated Learning Via Hardness of Learning with Errors[C]//Proceedings of the 2022 USENIX Security Symposium.Boston:USENIX Association,2022:1379-1395.
[13] MA J,NAAS S A,SIGG S,et al.Privacy-Preserving Federated Learning Based on Multi-Key Homomorphic Encryption[J].International Journal of Intelligent Systems,2022,37(9):5880-5901.
[14] PHONG L T,AONO Y,HAYASHI T,et al.Privacy-Preserving Deep Learning Via Additively Homomorphic Encryption[J].IEEE Transactions on Information Forensics and Security,2018,13(5):1333-1345.
[15] ZHU H,WANG R,JIN Y,et al.Distributed Additive Encryption and Quantization for Privacy Preserving Federated Deep Learning[J].Neurocomputing,2021,463:309-327.
[16] FUNG C,YOON C J M,BESCHASTNIKH I.The Limitations of Federated Learning in Sybil Settings[C]//Proceedings of the 2020 International Symposium on Research in Attacks,Intrusions and Defenses.USENIX Association,2020:301-316.
[17] BLANCHARD P,MHAMDI E M E,GUERRAOUI R,et al.Machine Learning with Adversaries:Byzantine Tolerant Gradient Descent[C]//Proceedings of the 2017 International Conference on Neural Information Processing Systems.Los Angeles:NIPS,2017:118-128.
[18] YIN D,CHEN Y,KANNAN R,et al.Byzantine-Robust Distributed Learning:Towards Optimal Statistical Rates[C]//Proceedings of the 2018 International Conference on Machine Learning.San Diego:JMLR,2018:5650-5659.
[19] SUN Z,KAIROUZ P,SURESH A T,et al.Can You Really Backdoor Federated Learning?[J].arXiv:1911.07963,2019.
[20] SO J,GÜLER B,AVESTIMEHR A S.Byzantine-Resilient Secure Federated Learning[J].IEEE Journal on Selected Areas in Communications,2021,39(7):2168-2181.
[21] MA Z,MA J,MIAO Y,et al.ShieldFL:Mitigating Model Poisoning Attacks in Privacy-Preserving Federated Learning[J].IEEE Transactions on Information Forensics and Security,2022,17:1639-1654.
[22] LIU X,LI H,XU G,et al.Privacy-Enhanced Federated Learning against Poisoning Adversaries[J].IEEE Transactions on Information Forensics and Security,2021,16:4574-4588.
[23] NASERI M,HAYES J,DE CRISTOFARO E.Local and Central Differential Privacy for Robustness and Privacy in Federated Learning[C]//Proceedings of the 2022 Network and Distributed System Security Symposium.Reston:Internet Society,2022:1-19.
[24] JEBREEL N M,DOMINGO-FERRER J,BLANCO-JUSTICIA A,et al.Enhanced Security and Privacy Via Fragmented Federated Learning[J].IEEE Transactions on Neural Networks and Learning Systems,2024,35(5):6703-6717.
[25] BREUNIG M M,KRIEGEL H P,NG R T,et al.LOF:Identifying Density-Based Local Outliers[C]//Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data.New York:Association for Computing Machinery,2000:93-104.
[26] LIU Z,GUO J,LAM K Y,et al.Efficient Dropout-Resilient Aggregation for Privacy-Preserving Machine Learning[J].IEEE Transactions on Information Forensics and Security,2023,18:1839-1854.
[27] JAHANI-NEZHAD T,MADDAH-ALI M A,LI S,et al.SwiftAgg:Communication-Efficient and Dropout-Resistant Secure Aggregation for Federated Learning with Worst-Case Security Guarantees[C]//Proceedings of the 2022 IEEE International Symposium on Information Theory(ISIT).Espoo:IEEE,2022:103-108.
[28] FANG M,CAO X,JIA J,et al.Local Model Poisoning Attacks to Byzantine-Robust Federated Learning[C]//Proceedings of the 2020 USENIX Security Symposium.Berkeley:USENIX Association,2020:1623-1640.
[29] LI L,XU W,CHEN T,et al.RSA:Byzantine-Robust Stochastic Aggregation Methods for Distributed Learning from Heterogeneous Datasets[C]//Proceedings of the 2019 AAAI Conference on Artificial Intelligence.Palo Alto:AAAI Press,2019:1544-1551.
[30] LI T,HU S,BEIRAMI A,et al.Ditto:Fair and Robust Federated Learning through Personalization[C]//Proceedings of the 2021 International Conference on Machine Learning.San Diego:JMLR,2021:6357-6368.
[31] BAGDASARYAN E,VEIT A,HUA Y,et al.How to Backdoor Federated Learning[C]//Proceedings of the 2020 International Conference on Artificial Intelligence and Statistics.Boston:Addison Wesley Publishing Company,2020:2938-2948.
[32] OZDAYI M S,KANTARCIOGLU M,GEL Y R.Defending against Backdoors in Federated Learning with Robust Learning Rate[C]//Proceedings of the 2021 AAAI Conference on Artificial Intelligence.Palo Alto:AAAI Press,2021:9268-9276.
[33] MA X,SUN X,WU Y,et al.Differentially Private Byzantine-Robust Federated Learning[J].IEEE Transactions on Parallel and Distributed Systems,2022,33(12):3690-3701.
[34] CHEN X,YU H,JIA X,et al.APFed:Anti-Poisoning Attacks in Privacy-Preserving Heterogeneous Federated Learning[J].IEEE Transactions on Information Forensics and Security,2023,18:5749-5761.
[35] XIE C,HUANG K,CHEN P Y,et al.DBA:Distributed Backdoor Attacks against Federated Learning[C]//Proceedings of the 2020 International Conference on Learning Representations.2020:1-15.
[36] JAGIELSKI M,OPREA A,BIGGIO B,et al.Manipulating Machine Learning:Poisoning Attacks and Countermeasures for Regression Learning[C]//Proceedings of the 2018 IEEE Symposium on Security and Privacy.New York:IEEE Press,2018:19-35.
[37] MOHASSEL P,ZHANG Y.SecureML:A System for Scalable Privacy-Preserving Machine Learning[C]//Proceedings of the 2017 IEEE Symposium on Security and Privacy.New York:IEEE Press,2017:19-38.
[38] XU G W,LI H W,ZHANG Y,et al.Privacy-Preserving Federated Deep Learning with Irregular Users[J].IEEE Transactions on Dependable and Secure Computing,2022,19(2):1364-1381.
[1] LI Zhi, LIN Sen, ZHANG Qiang. Edge Cloud Computing Approach for Intelligent Fault Detection in Rail Transit [J]. Computer Science, 2024, 51(9): 331-337.
[2] KONG Lingchao, LIU Guozhu. Review of Outlier Detection Algorithms [J]. Computer Science, 2024, 51(8): 20-33.
[3] FU Yanming, ZHANG Siyuan. Privacy Incentive Mechanism for Mobile Crowd-sensing with Comprehensive Scoring [J]. Computer Science, 2024, 51(7): 397-404.
[4] LAN Yajie, MA Ziqiang, CHEN Jiali, MIAO Li, XU Xin. Survey on Application of Searchable Attribute-based Encryption Technology Based on Blockchain [J]. Computer Science, 2024, 51(6A): 230800016-14.
[5] SUN Min, DING Xining, CHENG Qian. Federated Learning Scheme Based on Differential Privacy [J]. Computer Science, 2024, 51(6A): 230600211-6.
[6] TAN Zhiwen, XU Ruzhi, WANG Naiyu, LUO Dan. Differential Privacy Federated Learning Method Based on Knowledge Distillation [J]. Computer Science, 2024, 51(6A): 230600002-8.
[7] LIU Dongqi, ZHANG Qiong, LIANG Haolan, ZHANG Zidong, ZENG Xiangjun. Study on Smart Grid AMI Intrusion Detection Method Based on Federated Learning [J]. Computer Science, 2024, 51(6A): 230700077-8.
[8] WANG Chenzhuo, LU Yanrong, SHEN Jian. Study on Fingerprint Recognition Algorithm for Fairness in Federated Learning [J]. Computer Science, 2024, 51(6A): 230800043-9.
[9] ZANG Hongrui, YANG Tingting, LIU Hongbo, MA Kai. Study on Cryptographic Verification of Distributed Federated Learning for Internet of Things [J]. Computer Science, 2024, 51(6A): 230700217-5.
[10] ZHOU Tianyang, YANG Lei. Study on Client Selection Strategy and Dataset Partition in Federated Learning Based on Edge TB [J]. Computer Science, 2024, 51(6A): 230800046-6.
[11] LIU Jianxun, ZHANG Xinglin. Federated Learning Client Selection Scheme Based on Time-varying Computing Resources [J]. Computer Science, 2024, 51(6): 354-363.
[12] XU Yicheng, DAI Chaofan, MA Wubin, WU Yahui, ZHOU Haohao, LU Chenyang. Particle Swarm Optimization-based Federated Learning Method for Heterogeneous Data [J]. Computer Science, 2024, 51(6): 391-398.
[13] LU Yanfeng, WU Tao, LIU Chunsheng, YAN Kang, QU Yuben. Survey of UAV-assisted Energy-Efficient Edge Federated Learning [J]. Computer Science, 2024, 51(4): 270-279.
[14] XING Kaiyan, CHEN Wen. Multi-generator Active Learning Algorithm Based on Reverse Label Propagation and Its Application in Outlier Detection [J]. Computer Science, 2024, 51(4): 359-365.
[15] WANG Degang, SUN Yi, GAO Qi. Active Membership Inference Attack Method Based on Multiple Redundant Neurons [J]. Computer Science, 2024, 51(4): 373-380.