Computer Science ›› 2023, Vol. 50 ›› Issue (11): 356-363. doi: 10.11896/jsjkx.221200005

• Information Security •

Backdoor Defense of Horizontal Federated Learning Based on Random Cutting and Gradient Clipping

XU Wentao, WANG Binjun   

  1. College of Information and Cyber Security, People's Public Security University of China, Beijing 100038, China
  • Received: 2022-12-01  Revised: 2023-03-13  Online: 2023-11-15  Published: 2023-11-06
  • About author: XU Wentao, born in 1999, postgraduate. His main research interests include federated learning and backdoor attacks. WANG Binjun, born in 1962, Ph.D, professor, Ph.D supervisor, is a member of China Computer Federation. His main research interests include network security and law enforcement.
  • Supported by:
    Key Program of National Social Science Foundation (20AZD114).

Abstract: Federated learning is a methodology that resolves the contradiction between user privacy and data sharing in big data, realizing the principle that “data is invisible but available”. However, the federated model is exposed to backdoor attacks during training: an attacker locally trains an attack model containing a backdoor task and amplifies its model parameters by a certain proportion to implant the backdoor into the federated model. Facing the backdoor threat in the training process of horizontal federated learning, this paper proposes, from the perspective of game theory, a backdoor defense strategy and technical scheme that combines random cutting with gradient clipping. After receiving gradients from the participants, the central server randomly selects neural network layers from each participant and aggregates the participants' gradient contributions layer by layer. The central server then clips the gradient parameters according to a gradient threshold. Together, random cutting and gradient clipping weaken the influence of abnormal data from a minority of participants. The federated model's learning of backdoor features falls into a plateau, so it repeatedly fails to learn the backdoor features while the learning of the target task is unaffected. If the central server completes federated learning while the model is still in this plateau, the backdoor attack is defended against. Experimental results show that the proposed method effectively defends against potential backdoor threats in federated learning while preserving model accuracy. It can therefore be applied in horizontal federated learning scenarios to provide security protection for federated learning.
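The abstract names the two server-side operations but, being an abstract, leaves their exact form open. Below is a minimal NumPy sketch of one plausible reading: each participant's per-layer gradient survives with some probability (random cutting), surviving gradients are rescaled to an L2-norm threshold (gradient clipping), and the server averages the survivors layer by layer. The function names, the keep_ratio parameter, the Bernoulli layer selection, and the L2 clipping rule are illustrative assumptions, not the authors' published procedure.

import numpy as np

def clip_gradient(grad, threshold):
    # Gradient clipping (assumed L2 rule): rescale so the norm is at most `threshold`.
    norm = np.linalg.norm(grad)
    if norm > threshold:
        grad = grad * (threshold / norm)
    return grad

def aggregate_round(client_updates, threshold, keep_ratio=0.5, rng=None):
    # One aggregation round combining random cutting and gradient clipping.
    # client_updates: list of dicts mapping layer name -> gradient array,
    # one dict per participant. `keep_ratio` is a hypothetical parameter.
    if rng is None:
        rng = np.random.default_rng()
    aggregated = {}
    for name in client_updates[0]:
        # Random cutting (assumed Bernoulli selection): sample which
        # participants contribute this layer in this round.
        kept = [clip_gradient(u[name], threshold)
                for u in client_updates if rng.random() < keep_ratio]
        if kept:
            aggregated[name] = np.mean(kept, axis=0)
        else:
            # No participant was selected for this layer in this round.
            aggregated[name] = np.zeros_like(client_updates[0][name])
    return aggregated

# Usage: three honest participants and one whose update is scaled up,
# as in the amplification attack described in the abstract.
honest = [{"fc.weight": np.ones(4)} for _ in range(3)]
malicious = {"fc.weight": 50.0 * np.ones(4)}
agg = aggregate_round(honest + [malicious], threshold=2.0,
                      rng=np.random.default_rng(0))
print(agg["fc.weight"])  # clipping caps the malicious contribution at norm 2.0

Under these assumptions the demo illustrates why amplification loses its leverage: once the malicious update is clipped back to the common norm threshold, its scaled-up parameters no longer dominate the layer-wise average, and random cutting further reduces how often any single participant's layer enters the aggregate at all.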

Key words: Horizontal federated learning, Backdoor attack, Random cutting, Gradient clipping

CLC Number: TP391