Computer Science ›› 2024, Vol. 51 ›› Issue (11A): 240100176-9. doi: 10.11896/jsjkx.240100176
XU Wentao1, WANG Binjun1, ZHU Lixin2, WANG Hanxu1, GONG Ying1
Abstract: Federated learning is vulnerable to backdoor attacks based on model replacement. To address the poor performance of existing backdoor detection methods, this paper proposes a multi-party co-governance defense strategy against backdoors in horizontal federated learning. The strategy aims to establish a co-governance mechanism between the central server and the clients of a federated learning system, so that backdoors in the model can be effectively detected and defended against without compromising data privacy or main-task performance. It comprises shallow backdoor scanning, deep backdoor detection, and model repair, all carried out by the clients in coordination with the central server. Shallow backdoor scanning is a lightweight, real-time backdoor detection scheme that adds no significant time overhead: each client monitors the aggregated model's parameters for anomalous changes and reports them to the central server. When the number of reports reaches a preset threshold, the central server initiates deep backdoor detection; the clients pause the federated learning process and perform an in-depth inspection to determine whether any neurons in the model behave anomalously under the influence of a backdoor attack. If anomalies are found, each client restores the model to a benign state by splicing a benign model with the attacked model, then submits its deep-detection results and repair proposal to the central server, which decides on the final repair plan and thereby removes the backdoor completely. Experimental results show that the strategy effectively detects and removes backdoors in federated learning models, safeguarding the secure operation of horizontal federated learning.
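The client-side shallow scan described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `shallow_scan`, the threshold `z_thresh`, and the use of a median-absolute-deviation (MAD) rule to flag anomalous parameter changes are all assumptions made for the sketch.

```python
import numpy as np

def shallow_scan(prev_params, new_params, z_thresh=3.5):
    """Lightweight client-side check of the aggregated model (sketch).

    Compares the newly aggregated parameters against the previous
    round's, using a robust MAD-based score so that a few
    backdoor-inflated weights stand out against the benign majority.
    Returns True if the update looks anomalous, in which case the
    client would report to the central server.
    """
    # Flatten all layers into one vector of absolute per-weight changes.
    delta = np.abs(np.concatenate([p.ravel() for p in new_params])
                   - np.concatenate([p.ravel() for p in prev_params]))
    med = np.median(delta)
    mad = np.median(np.abs(delta - med))
    # Guard against a zero MAD (e.g. a perfectly uniform update).
    scale = mad if mad > 0 else 1e-12
    # 0.6745 rescales the MAD to be comparable to a standard deviation.
    z = 0.6745 * (delta - med) / scale
    return bool(np.any(z > z_thresh))
```

The central server would count such reports per round and trigger deep detection once the count crosses the agreed threshold.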
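The splicing-based repair can likewise be sketched in a few lines. Again this is a hypothetical illustration under assumed data shapes (models as name-to-array dicts, anomalies reported as per-layer neuron indices), not the paper's actual repair procedure.

```python
import numpy as np

def splice_repair(attacked, benign, anomalous):
    """Rebuild a clean model by splicing (sketch).

    For each layer, copy the rows (neurons) that deep detection
    flagged as anomalous from a benign reference model into the
    attacked model, keeping the unaffected neurons intact.

    attacked, benign: dicts mapping layer name -> weight array.
    anomalous: dict mapping layer name -> list of flagged row indices.
    """
    repaired = {}
    for name, weights in attacked.items():
        weights = weights.copy()  # do not mutate the caller's model
        for idx in anomalous.get(name, []):
            weights[idx] = benign[name][idx]
        repaired[name] = weights
    return repaired
```

Each client would submit its `repaired` candidate to the central server, which selects the final repair plan from the submitted proposals.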