计算机科学 (Computer Science) ›› 2022, Vol. 49 ›› Issue (6A): 496-501. doi: 10.11896/jsjkx.210400298

• Information Security •

  • Corresponding author: LIN Ying (linying@ynu.edu.cn)
  • Author e-mail: 954443436@qq.com

Training Method to Improve Robustness of Federated Learning

YAN Meng1, LIN Ying1,2, NIE Zhi-shen1, CAO Yi-fan1, PI Huan1, ZHANG Lan1   

  1 School of Software,Yunnan University,Kunming 650091,China
    2 Key Laboratory for Software Engineering of Yunnan Province,Kunming 650091,China
  • Online:2022-06-10 Published:2022-06-08
  • About author:YAN Meng,born in 1995,master.His main research interests include information security and software engineering.
    LIN Ying,born in 1973,Ph.D,associate professor,is a member of China Computer Federation.Her main research interest is information security.



Abstract: The federated learning method breaks the bottleneck of data barriers and has been widely used in fields such as finance and medical aided diagnosis.However,adversarial attacks launched with adversarial examples pose a serious threat to the security of federated learning models.Adversarial training is a common way to improve model robustness and thereby ensure model security,but existing adversarial training methods target centralized machine learning and usually defend only against specific adversarial attacks.This paper proposes an adversarial training method that enhances model robustness in the federated learning scenario.The method trains on single-step adversarial examples,iterative adversarial examples and clean samples,and adjusts the weight of the loss function for each kind of training sample to complete both local training and the update of the global model.Experiments on the Mnist and Fashion_Mnist datasets show that,although the adversarial training is based only on FGSM and PGD,it greatly enhances the robustness of the federated model in different attack scenarios and remains effective against other attack methods such as FFGSM,PGDDLR and BIM.
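The training loop the abstract describes, mixing single-step (FGSM) and iterative (PGD) adversarial examples with clean samples, weighting the three loss terms in each local update, then averaging client models into the global model, can be sketched as follows. This is a minimal NumPy illustration on a toy logistic-regression "client model", not the paper's implementation: the loss weights, step sizes, epsilon, round/client counts, and the helper names (`fgsm`, `pgd`, `local_adv_step`, `fedavg`) are all assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_x(w, x, y):
    """Gradient of the binary cross-entropy loss w.r.t. the *input* x."""
    return (sigmoid(w @ x) - y) * w

def grad_w(w, x, y):
    """Gradient of the binary cross-entropy loss w.r.t. the weights w."""
    return (sigmoid(w @ x) - y) * x

def fgsm(w, x, y, eps):
    # Single-step attack: one signed-gradient perturbation of size eps.
    return x + eps * np.sign(grad_x(w, x, y))

def pgd(w, x, y, eps, alpha=0.05, steps=5):
    # Iterative attack: repeated small steps, projected back into the eps-ball.
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_x(w, x_adv, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

def local_adv_step(w, x, y, eps=0.1, lr=0.5, weights=(0.4, 0.3, 0.3)):
    """One local update: weighted sum of clean, FGSM and PGD loss gradients.
    The (0.4, 0.3, 0.3) weights are an assumed setting, not the paper's."""
    a, b, c = weights
    g = (a * grad_w(w, x, y)
         + b * grad_w(w, fgsm(w, x, y, eps), y)
         + c * grad_w(w, pgd(w, x, y, eps), y))
    return w - lr * g

def fedavg(client_ws):
    """Global update: plain parameter averaging across clients (FedAvg)."""
    return np.mean(client_ws, axis=0)

rng = np.random.default_rng(0)
w_global = np.zeros(2)
for _ in range(20):                    # communication rounds
    client_ws = []
    for _ in range(3):                 # clients, one toy sample per class here
        x1 = rng.normal(size=2) + 2.0  # class-1 cluster around (+2, +2)
        w = local_adv_step(w_global.copy(), x1, 1.0)
        x0 = rng.normal(size=2) - 2.0  # class-0 cluster around (-2, -2)
        w = local_adv_step(w, x0, 0.0)
        client_ws.append(w)
    w_global = fedavg(client_ws)       # aggregate locally trained models

# The globally averaged model should classify a clean class-1 point correctly.
clean_correct = sigmoid(w_global @ np.array([2.0, 2.0])) > 0.5
print(clean_correct)
```

The key design point mirrored from the abstract is that each client computes all three loss gradients on the *same* minibatch and combines them before the parameter step, so robustness to both single-step and iterative perturbations is traded off through the weights rather than through separate training phases.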

Key words: Adversarial attack, Adversarial training, Federated learning, Robustness

CLC number: TP309