Computer Science ›› 2024, Vol. 51 ›› Issue (1): 335-344. doi: 10.11896/jsjkx.230500024

• Information Security •

Defense Method Against Backdoor Attack in Federated Learning for Industrial Scenarios

WANG Xun1, XU Fangmin1,2, ZHAO Chenglin1,2, LIU Hongfu1   

  1 School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China
    2 Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, Beijing 100876, China
  • Received: 2023-05-05 Revised: 2023-11-06 Online: 2024-01-15 Published: 2024-01-12
  • About author: WANG Xun, born in 1999, master. His main research interests include machine learning and machine learning security.
    XU Fangmin, born in 1982, Ph.D, associate professor. His main research interests include Internet of Things networks and future network technology.
  • Supported by:
    National Natural Science Foundation of China (U61971050).

Abstract: As a machine learning paradigm that resolves the isolated data island problem while still allowing data resources to be shared, federated learning matches the requirements of intelligent industrial equipment and has therefore been adopted in many industries. However, attack methods against the federated learning architecture are constantly evolving. Backdoor attack, a representative attack method, is both stealthy and destructive, while traditional defense schemes often fail to work under the federated learning framework or lack the ability to detect early-stage backdoor attacks. Research on backdoor defense schemes applicable to the federated learning architecture is therefore of great significance. This paper proposes a backdoor diagnosis scheme for the federated learning architecture that reconstructs the backdoor trigger from the characteristics of the backdoor model without requiring any data. The scheme accurately identifies and removes backdoor models, thereby defending the global model against backdoors. In addition, a new detection mechanism is proposed to detect backdoors in early-stage models. On this basis, the model judgment algorithm is optimized, and both accuracy and speed are improved through an early exiting united judgment mode.
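The abstract only outlines the defense at a high level. As a rough illustration of the trigger-reconstruction idea it builds on, the snippet below follows the well-known Neural-Cleanse-style optimization: for each candidate target label, a mask and pattern are optimized so that stamping them onto inputs forces that label, and labels whose reconstructed trigger is an L1-norm outlier are flagged. This is a hedged sketch, not the authors' data-free method (it still uses a small clean data loader); the function names, parameters, and thresholds are assumptions for illustration only.

```python
# Hedged sketch, not the paper's exact algorithm: Neural-Cleanse-style trigger
# reconstruction against one suspect model, plus an L1-norm outlier test.
# The paper's scheme is described as data-free; this illustration still uses a
# small clean data_loader. All names and thresholds below are assumptions.
import torch
import torch.nn.functional as F

def reverse_engineer_trigger(model, data_loader, target_label,
                             steps=100, lam=1e-2, device="cpu"):
    """Optimise a mask/pattern pair that forces `target_label`; an unusually
    small mask suggests a genuine backdoor trigger for that label."""
    model = model.eval().to(device)
    x0, _ = next(iter(data_loader))                       # shape reference only
    mask = torch.zeros_like(x0[0], device=device, requires_grad=True)
    pattern = torch.zeros_like(x0[0], device=device, requires_grad=True)
    opt = torch.optim.Adam([mask, pattern], lr=0.1)

    for _ in range(steps):
        for x, _ in data_loader:
            x = x.to(device)
            m = torch.sigmoid(mask)                        # keep mask in [0, 1]
            x_adv = (1 - m) * x + m * torch.tanh(pattern)  # stamp the trigger
            target = torch.full((x.size(0),), target_label,
                                dtype=torch.long, device=device)
            # classification loss toward the target label + L1 sparsity on the mask
            loss = F.cross_entropy(model(x_adv), target) + lam * m.sum()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return torch.sigmoid(mask).detach(), torch.tanh(pattern).detach()

def flag_backdoored_labels(model, data_loader, num_classes, device="cpu"):
    """Flag labels whose reconstructed trigger mask is an L1-norm outlier
    (median-absolute-deviation rule, as in Neural Cleanse)."""
    norms = torch.stack([
        reverse_engineer_trigger(model, data_loader, c, device=device)[0].sum()
        for c in range(num_classes)
    ])
    med = norms.median()
    mad = (norms - med).abs().median() + 1e-9
    scores = (med - norms) / (1.4826 * mad)                # small norm => suspicious
    return [c for c in range(num_classes) if scores[c].item() > 2.0]
```

In a federated setting, a check of this kind would be run on each uploaded client model before aggregation, and any model for which a trigger is flagged would be excluded from the global update, which is consistent with the identify-and-remove goal stated in the abstract.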

Key words: Federated learning, Backdoor defense, Early backdoor attack, Backdoor trigger, Early exiting united judgment

CLC Number: TP181