Computer Science, 2022, Vol. 49, Issue (6A): 496-501. doi: 10.11896/jsjkx.210400298

• Information Security •

Training Method to Improve Robustness of Federated Learning

YAN Meng1, LIN Ying1,2, NIE Zhi-shen1, CAO Yi-fan1, PI Huan1, ZHANG Lan1   

  1. School of Software, Yunnan University, Kunming 650091, China
  2. Key Laboratory for Software Engineering of Yunnan Province, Kunming 650091, China
  • Online: 2022-06-10  Published: 2022-06-08
  • About author: YAN Meng, born in 1995, master. His main research interests include information security and software engineering.
    LIN Ying, born in 1973, Ph.D, associate professor, is a member of China Computer Federation. Her main research interest is information security.

Abstract: Federated learning breaks through the bottleneck of data barriers and has been widely used in finance and medical aided diagnosis. However, the existence of adversarial attacks poses a potential threat to the security of federated learning models. To keep federated learning models safe in real-world scenarios, improving model robustness is an effective solution. Adversarial training is a common way to enhance robustness, but traditional adversarial training methods are all designed for centralized machine learning and usually can only defend against specific adversarial attacks. This paper proposes an adversarial training method that enhances model robustness in the federated learning scenario. The method adds single-step adversarial samples, iterative adversarial samples and normal samples to the training process, and adjusts the weight of the loss function for each kind of training sample to complete local training and the update of the global model. Experiments on the MNIST and Fashion-MNIST datasets show that although the adversarial training is based only on FGSM and PGD, the method greatly enhances the robustness of the federated model in different attack scenarios, and it is also effective against other attack methods such as FFGSM, PGD-DLR and BIM.
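
The following PyTorch sketch illustrates the training scheme the abstract describes: each client mixes clean samples with single-step (FGSM) and iterative (PGD) adversarial samples, weights the three loss terms, and the server averages the local models (FedAvg). This is a minimal sketch under stated assumptions; the function names, the weights w_clean/w_fgsm/w_pgd and all hyper-parameter values are illustrative, not the paper's actual settings.

    # Minimal sketch of weighted adversarial training under federated averaging.
    # All weights and hyper-parameters below are illustrative assumptions.
    import copy
    import torch
    import torch.nn.functional as F

    def fgsm(model, x, y, eps=0.1):
        """Single-step FGSM adversarial example."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        return (x_adv + eps * grad.sign()).clamp(0, 1).detach()

    def pgd(model, x, y, eps=0.1, alpha=0.02, steps=10):
        """Iterative PGD adversarial example with an L-inf projection."""
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back into the eps-ball
            x_adv = x_adv.clamp(0, 1)
        return x_adv.detach()

    def local_update(global_model, loader, w_clean=0.4, w_fgsm=0.3, w_pgd=0.3,
                     lr=0.01, epochs=1):
        """One client's local training on clean + FGSM + PGD samples,
        with a weighted sum of the three loss terms."""
        model = copy.deepcopy(global_model)
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        for _ in range(epochs):
            for x, y in loader:
                x_fgsm, x_pgd = fgsm(model, x, y), pgd(model, x, y)
                loss = (w_clean * F.cross_entropy(model(x), y)
                        + w_fgsm * F.cross_entropy(model(x_fgsm), y)
                        + w_pgd * F.cross_entropy(model(x_pgd), y))
                opt.zero_grad()
                loss.backward()
                opt.step()
        return model.state_dict()

    def fedavg(states):
        """Server step: average the clients' parameters (equal client weights)."""
        avg = copy.deepcopy(states[0])
        for k in avg:
            avg[k] = torch.stack([s[k].float() for s in states]).mean(0)
        return avg

Weighting the three loss terms lets each client trade clean accuracy against robustness to single-step and iterative attacks without changing the server-side FedAvg aggregation.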

Key words: Adversarial attack, Adversarial training, Federated learning, Robustness

CLC Number: TP309