Computer Science ›› 2024, Vol. 51 ›› Issue (4): 373-380. doi: 10.11896/jsjkx.230100024

• Information Security •

  • Corresponding author: SUN Yi (11112072@bjtu.edu.cn)
  • About the author: (wangpei_hubu@163.com)

Active Membership Inference Attack Method Based on Multiple Redundant Neurons

WANG Degang, SUN Yi, GAO Qi   

  1. School of Cryptographic Engineering,Information Engineering University,Zhengzhou 450001,China
  • Received:2023-01-04 Revised:2023-05-04 Online:2024-04-15 Published:2024-04-10


Abstract: Federated learning provides privacy protection for source data by exchanging model parameters or gradients. However, it still faces the risk of privacy disclosure: for example, a membership inference attack can infer whether a target data sample was used to train a machine learning model in federated learning. Existing active membership inference attacks in federated learning that rely on crafted model parameters are not robust to operations such as dropout. To address this problem, an active membership inference attack method based on multiple redundant neurons is proposed. The method exploits the property of the ReLU activation function that negative inputs yield zero outputs, crafts model parameters according to the target data, and infers membership from the difference between member and non-member data in the resulting model parameter updates. The redundancy of model neurons is further used to construct multiple paths, making the attack robust to dropout. Experiments on the MNIST, CIFAR10 and CIFAR100 datasets demonstrate the effectiveness of the method: even when dropout is used in model training, the proposed method still achieves 100% accuracy.
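The mechanism the abstract describes can be illustrated with a minimal toy sketch in NumPy. This is not the paper's implementation: all names, the assumption that inputs are L2-normalized, and the dropout parameters are illustrative. A single first-layer ReLU neuron is crafted so that it fires only on the target sample; its weight gradient is then nonzero exactly when the target appears in a training batch, and replicating the neuron makes the signal survive dropout.

```python
import numpy as np

rng = np.random.default_rng(0)

def unit(v):
    """L2-normalize a vector (assumption: all inputs are unit-norm)."""
    return v / np.linalg.norm(v)

# Target sample whose membership the dishonest server wants to infer.
x_target = unit(rng.normal(size=16))

# Craft one first-layer ReLU neuron: weights equal the target, bias just
# below the target's pre-activation. For unit-norm inputs, by
# Cauchy-Schwarz x . x_target < 1 whenever x != x_target, so the
# neuron's ReLU gate opens only for the target itself.
w = x_target.copy()
b = -0.999

def neuron_grad_w(batch):
    """Batch-summed gradient of ReLU(w.x + b) w.r.t. w: x if active, else 0."""
    g = np.zeros_like(w)
    for x in batch:
        if w.dot(x) + b > 0:          # ReLU active only for the target
            g += x
    return g

member_batch = [x_target] + [unit(rng.normal(size=16)) for _ in range(7)]
non_member_batch = [unit(rng.normal(size=16)) for _ in range(8)]

# Membership signal: the crafted neuron's weight update is nonzero
# exactly when the target sample appeared in the local training batch.
in_update = np.linalg.norm(neuron_grad_w(member_batch))
out_update = np.linalg.norm(neuron_grad_w(non_member_batch))
print(in_update > 0, out_update == 0)

# Redundancy against dropout (hypothetical parameters): replicate the
# crafted neuron K times; the attack fails only if dropout removes every
# copy at once, which happens with probability p_drop ** K.
K, p_drop = 10, 0.5
print(p_drop ** K)   # ~1e-3 chance that all redundant paths are lost
```

The bias margin here assumes normalized, distinct inputs; the paper's construction handles general data, but the gating idea is the same: a zero ReLU output contributes a zero gradient, so non-member batches leave the crafted weights untouched.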

Key words: Federated learning, Machine learning model, Multiple redundant neurons, Active membership inference attack

CLC number: TP309