Computer Science, 2024, Vol. 51, Issue 7: 405-412.  doi: 10.11896/jsjkx.230500012

• Information Security •


Lagrangian Dual-based Privacy Protection and Fairness Constrained Method for Few-shot Learning

WANG Jinghong1,2,3, TIAN Changshen1,2,3, LI Haokang4, WANG Wei1,3   

  1 College of Computer and Cyber Security, Hebei Normal University, Shijiazhuang 050024, China
    2 Hebei Key Laboratory of Network and Information Security, Hebei Normal University, Shijiazhuang 050024, China
    3 Hebei Provincial Engineering Research Center for Supply Chain Big Data Analytics & Security, Hebei Normal University, Shijiazhuang 050024, China
    4 Artificial Intelligence and Big Data College, Hebei University of Engineering Science, Shijiazhuang 050020, China
  • Received: 2023-04-30  Revised: 2023-08-29  Online: 2024-07-15  Published: 2024-07-10
  • Corresponding author: WANG Wei (wangwei2021@hebtu.edu.cn)
  • About author: WANG Jinghong (wangjinghong@126.com), born in 1967, Ph.D, professor, academic advisor, is a member of CCF (No. 58341S). Her main research interests include machine learning, data mining, and artificial intelligence.
    WANG Wei, born in 1982, Ph.D, associate professor, academic advisor, is a member of CCF (No. 51382M). His main research interests include machine learning, knowledge representation, and virtual simulation.
  • Supported by:
    Natural Science Foundation of Hebei, China (F2021205014), Science and Technology Project of Hebei Education Department (ZD2022139), Central Guidance on Local Science and Technology Development Fund of Hebei Province (226Z1808G), Project Funded by the Introduction of Overseas Students in Hebei Province (C20200340) and Science Foundation of Hebei Normal University (L2022B22).


Abstract: Few-shot learning aims to train models on a small amount of data while substantially improving their utility, and it offers an important approach to addressing the privacy and fairness issues raised by sensitive data in neural network models. Because few-shot datasets often contain sensitive attributes, and such attributes may carry discriminatory bias, training neural networks on them risks both privacy leakage and unfair outcomes. Moreover, in many domains data are difficult or impossible to obtain for privacy or security reasons, and in differentially private models the noise introduced for protection not only reduces model utility but also unbalances model fairness. To address these challenges, this paper proposes a sample-level adaptive privacy filtering algorithm based on a Rényi differential privacy filter, which uses Rényi differential privacy to compute the privacy loss more precisely. It further proposes a Lagrangian dual-based privacy and fairness constraint algorithm, which incorporates the differential privacy constraint and the fairness constraint into the objective function and introduces Lagrange multipliers to balance them. The Lagrangian method transforms the constrained objective into its dual problem, so that privacy and fairness are optimized jointly and their trade-off is balanced through the Lagrangian function. Experimental results show that the proposed method improves model performance while guaranteeing both the privacy and the fairness of the model.
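
To make the first component of the abstract concrete, the sketch below illustrates, in plain Python, how Rényi differential privacy supports per-sample privacy accounting. It is not the paper's algorithm: the class and function names are assumptions, and the filter simply follows the general idea of the individual Rényi filter of Feldman and Zrnic, charging each sample the Rényi-DP cost of one Gaussian-mechanism release and excluding it once its accumulated loss, converted to an (ε, δ) guarantee, would exceed a target budget.

```python
import numpy as np

# Renyi orders at which the privacy loss is tracked.
ALPHAS = np.array([1.5, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0])


def rdp_gaussian(noise_multiplier, alphas=ALPHAS):
    """Per-release RDP of the Gaussian mechanism with L2 sensitivity 1:
    eps_RDP(alpha) = alpha / (2 * sigma^2)."""
    return alphas / (2.0 * noise_multiplier ** 2)


def rdp_to_eps(rdp, delta, alphas=ALPHAS):
    """Standard conversion of accumulated RDP to an (eps, delta) guarantee:
    eps = min_alpha [ rdp(alpha) + log(1/delta) / (alpha - 1) ]."""
    return float(np.min(rdp + np.log(1.0 / delta) / (alphas - 1.0)))


class SampleLevelRenyiFilter:
    """Tracks each sample's accumulated RDP and stops using a sample once
    charging it for one more release would exceed its (eps, delta) budget.
    Illustrative sketch only; names and structure are assumptions."""

    def __init__(self, n_samples, target_eps, target_delta):
        self.rdp = np.zeros((n_samples, len(ALPHAS)))
        self.active = np.ones(n_samples, dtype=bool)
        self.target_eps = target_eps
        self.target_delta = target_delta

    def select(self, batch_indices, noise_multiplier):
        """Return the indices of the batch that may still participate and
        charge them for one noisy gradient release."""
        step_rdp = rdp_gaussian(noise_multiplier)
        kept = []
        for i in batch_indices:
            if not self.active[i]:
                continue
            projected = self.rdp[i] + step_rdp
            if rdp_to_eps(projected, self.target_delta) <= self.target_eps:
                self.rdp[i] = projected
                kept.append(i)
            else:
                self.active[i] = False  # individual budget exhausted
        return kept


# Tiny usage demo with hypothetical numbers.
filt = SampleLevelRenyiFilter(n_samples=1000, target_eps=8.0, target_delta=1e-5)
usable = filt.select(batch_indices=[3, 17, 42], noise_multiplier=1.1)
```

In a DP-SGD-style loop, `select` would be called on each sampled mini-batch before per-example clipping and noise addition; samples whose individual budget is exhausted simply stop contributing, which is what makes the privacy spend adaptive at the sample level.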

Key words: Few-shot learning, Privacy and fairness, Rényi differential privacy, Fairness constraint, Lagrangian dual
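
The Lagrangian dual idea named in the keywords can likewise be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's method: it trains a logistic-regression model whose objective is augmented with a demographic-parity fairness term weighted by a multiplier λ ≥ 0, takes a clipped and noised gradient step on the primal variables to mimic a differentially private update, and updates λ by projected gradient ascent on the constraint violation (the dual step). The sensitive attribute s, the tolerance tau, and all hyperparameters are illustrative.

```python
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def fairness_violation(p, s, tau):
    """Demographic-parity gap minus tolerance (<= 0 means the constraint holds).
    Assumes both groups of the binary attribute s appear in the data."""
    return abs(p[s == 1].mean() - p[s == 0].mean()) - tau


def train_lagrangian_dual(X, y, s, tau=0.05, epochs=200, lr=0.1, lr_dual=0.05,
                          clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Alternating primal/dual updates on
        L(w, lam) = BCE(w) + lam * fairness_violation(w),   lam >= 0."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    w, lam = np.zeros(d), 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w)
        # Primal gradient: cross-entropy term plus multiplier * fairness term
        # (the fairness gradient is estimated by finite differences for brevity).
        grad_ce = X.T @ (p - y) / n
        h = 1e-4
        grad_fair = np.array([
            (fairness_violation(sigmoid(X @ (w + h * e)), s, tau)
             - fairness_violation(p, s, tau)) / h
            for e in np.eye(d)
        ])
        grad = grad_ce + lam * grad_fair
        # Clip the gradient and add Gaussian noise to mimic a DP-SGD-style step.
        grad = grad / max(1.0, np.linalg.norm(grad) / clip_norm)
        grad = grad + rng.normal(0.0, noise_multiplier * clip_norm / n, size=d)
        w = w - lr * grad
        # Dual step: projected gradient ascent on the multiplier.
        lam = max(0.0, lam + lr_dual * fairness_violation(sigmoid(X @ w), s, tau))
    return w, lam
```

A faithful differential-privacy guarantee would additionally require per-example gradient clipping as in DP-SGD; the single clipped batch gradient here only marks where the noise enters the primal update.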

CLC Number: TP309.3