Computer Science ›› 2024, Vol. 51 ›› Issue (1): 335-344. doi: 10.11896/jsjkx.230500024

• Information Security •

Defense Method Against Backdoor Attack in Federated Learning for Industrial Scenarios

WANG Xun1, XU Fangmin1,2, ZHAO Chenglin1,2, LIU Hongfu1   

  1. School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China
    2. Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, Beijing 100876, China
  • Received: 2023-05-05  Revised: 2023-11-06  Online: 2024-01-15  Published: 2024-01-12
  • Corresponding author: XU Fangmin (xufm@bupt.edu.cn)
  • About author: WANG Xun (wangxun68@bupt.edu.cn), born in 1999, master's candidate. His main research interests include machine learning and machine learning security.
    XU Fangmin, born in 1982, Ph.D., associate professor. His main research interests include Internet of Things networks and future network technology.
  • Supported by:
    National Natural Science Foundation of China (U61971050).

Abstract: As a machine learning method that can solve the isolated-data-island problem and enable data resources to be shared, federated learning matches the requirements of the intelligent development of industrial equipment, and artificial intelligence techniques represented by federated learning are therefore increasingly applied in the Industrial Internet. However, attack methods against the federated learning architecture are constantly being updated. Backdoor attacks, one representative class of such attacks, are both stealthy and highly destructive, while traditional defense schemes often fail to work under the federated learning architecture or offer insufficient protection against early backdoor attacks. It is therefore of great significance to study backdoor defense schemes applicable to the federated learning architecture. This paper proposes a backdoor diagnosis scheme for the federated learning architecture that, without requiring any data, exploits the formation characteristics of backdoor models to reconstruct the backdoor trigger, so that backdoor models can be accurately identified and removed and the global model can be defended against backdoors. In addition, a new detection mechanism is proposed to detect backdoors in early-stage models; on this basis, the model judgment algorithm is optimized, and the early exiting united judgment mode improves both accuracy and speed.
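The abstract does not disclose implementation details, but the data-free trigger reconstruction it describes can be pictured with a short sketch. Below is a minimal, hypothetical PyTorch illustration in the spirit of Neural-Cleanse-style mask/pattern optimization, adapted to the no-data setting by probing the suspect model with random inputs; every name, shape, and hyperparameter here (reconstruct_trigger, suspicion_scores, steps, lr, lam) is an assumption for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def reconstruct_trigger(model, target_class, input_shape=(8, 3, 32, 32),
                        steps=200, lr=0.1, lam=0.01):
    """Illustrative data-free trigger reconstruction (hypothetical sketch).

    Optimizes a mask/pattern pair so that random probe inputs stamped with
    the trigger are classified as target_class; an L1 penalty on the mask
    favors the small, localized triggers that real backdoors tend to use.
    """
    mask = torch.zeros(input_shape[1:], requires_grad=True)     # where the trigger sits
    pattern = torch.zeros(input_shape[1:], requires_grad=True)  # what it looks like
    opt = torch.optim.Adam([mask, pattern], lr=lr)
    model.eval()
    for _ in range(steps):
        x = torch.rand(input_shape)          # random probes: no training data needed
        m = torch.sigmoid(mask)              # keep mask values in [0, 1]
        stamped = (1 - m) * x + m * torch.sigmoid(pattern)
        target = torch.full((input_shape[0],), target_class, dtype=torch.long)
        loss = F.cross_entropy(model(stamped), target) + lam * m.sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(mask).detach(), torch.sigmoid(pattern).detach()

def suspicion_scores(model, num_classes):
    """L1 norm of the reconstructed mask for each candidate target class.

    A class whose reconstructed trigger is anomalously small is a backdoor
    suspect; an outlier test (e.g., median absolute deviation) over these
    norms would flag the corresponding uploaded model for removal.
    """
    return [reconstruct_trigger(model, c)[0].sum().item()
            for c in range(num_classes)]
```

On this reading, the server would run the diagnosis on each uploaded model before aggregation and exclude any model whose reconstructed trigger norms form a clear outlier, which matches the "accurately identified and removed" claim above.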

Key words: Federated learning, Backdoor defense, Early backdoor attack, Backdoor trigger, Early exiting united judgment
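The "Early exiting united judgment" keyword suggests a staged decision: a cheap statistic screens each uploaded model first, clear-cut cases exit immediately, and only ambiguous cases pay for the full trigger-reconstruction diagnosis, whose score is then fused with the cheap one. The sketch below is one plausible reading under that assumption; the thresholds, the fusion weight, and both scoring functions are illustrative placeholders rather than the paper's algorithm.

```python
def early_exit_united_judgment(model, cheap_score, full_score,
                               low=0.2, high=0.8, weight=0.5):
    """Hypothetical early-exit united judgment; returns True if backdoored.

    cheap_score: fast statistic in [0, 1], e.g., an update-norm outlier
                 score that is usable even on early-round models.
    full_score:  expensive statistic in [0, 1], e.g., derived from the
                 trigger-reconstruction diagnosis sketched above.
    """
    s1 = cheap_score(model)
    if s1 <= low:    # confidently benign: exit early, skip the full diagnosis
        return False
    if s1 >= high:   # confidently backdoored: exit early
        return True
    s2 = full_score(model)                          # ambiguous: run the full diagnosis
    return weight * s1 + (1 - weight) * s2 >= 0.5   # united decision on the fused score
```

Exiting early on clear-cut cases accounts for the claimed speed-up, while fusing both scores on ambiguous cases accounts for the accuracy gain; the abstract asserts both improvements without specifying the statistics or thresholds.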

CLC Number: TP181