Computer Science ›› 2024, Vol. 51 ›› Issue (7): 405-412. doi: 10.11896/jsjkx.230500012
WANG Jinghong1,2,3, TIAN Changshen1,2,3, LI Haokang4, WANG Wei1,3
Abstract: Few-shot learning aims to train models on only a small amount of data while substantially improving model utility, and it provides an important approach to addressing the privacy and fairness problems that arise when sensitive data are used in neural network models. Few-shot datasets often contain sensitive data, and such data may be discriminatory, so training neural networks on them carries both a risk of privacy leakage and fairness problems. Moreover, in many domains data are difficult or impossible to obtain for privacy, security, or similar reasons. At the same time, in differentially private models the injected noise not only reduces model utility but also upsets the balance of model fairness. To address these challenges, this paper proposes a sample-level adaptive privacy filtering algorithm based on a Rényi differential privacy filter, using Rényi differential privacy to account for the privacy loss more precisely. It further proposes a privacy and fairness constraint algorithm based on Lagrangian duality: the differential privacy constraint and the fairness constraint are added to the objective function via the Lagrangian method, with Lagrange multipliers introduced to balance them. The method of Lagrange multipliers converts the objective into its dual problem, so that privacy and fairness are optimized jointly and balanced through the Lagrangian function. Experimental results show that the proposed method improves model performance while guaranteeing the model's privacy and fairness.
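To make the sample-level privacy accounting concrete, the following minimal Python sketch assumes the Gaussian mechanism, for which one participation at Rényi order α with L2 sensitivity S and noise standard deviation σ costs α·S²/(2σ²). The class name `RenyiFilter` and its interface are illustrative assumptions based on the abstract, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch of a sample-level Rényi DP filter (assumed interface).
# Each sample accumulates its own RDP loss; once the next step would push a
# sample past its budget, that sample is filtered out of further training.

class RenyiFilter:
    def __init__(self, n_samples, alpha, rdp_budget):
        self.alpha = alpha                  # Rényi order
        self.rdp_budget = rdp_budget        # per-sample RDP budget at this order
        self.spent = np.zeros(n_samples)    # accumulated RDP loss per sample

    def rdp_cost(self, sensitivity, sigma):
        # RDP of the Gaussian mechanism at order alpha:
        # alpha * S^2 / (2 * sigma^2), with sigma the absolute noise std.
        return self.alpha * sensitivity ** 2 / (2.0 * sigma ** 2)

    def active_mask(self, batch_idx, sensitivity, sigma):
        """batch_idx: integer array of sample indices in the current batch.
        Returns a boolean mask of samples that may still participate, i.e.
        whose accumulated RDP plus this step's cost stays within the budget,
        and charges the cost to those samples."""
        cost = self.rdp_cost(sensitivity, sigma)
        mask = self.spent[batch_idx] + cost <= self.rdp_budget
        self.spent[batch_idx[mask]] += cost
        return mask
```

In a DP-SGD-style loop, per-sample gradients are clipped to `sensitivity`, the mask is applied before the gradients are summed, and Gaussian noise of standard deviation `sigma` is added to the sum; exhausted samples simply stop contributing, so no individual privacy loss exceeds the budget.

The Lagrangian-dual component can likewise be sketched as a primal-dual update: a fairness-violation term weighted by a Lagrange multiplier augments the task loss, the model parameters are updated by gradient descent on the Lagrangian, and the multiplier is updated by dual ascent. The demographic-parity gap used as the fairness measure, and every name and hyperparameter below, are assumptions for illustration rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def fairness_gap(logits, sensitive):
    # Demographic-parity gap: |E[pred | s=1] - E[pred | s=0]| (assumed measure).
    # Assumes each batch contains members of both sensitive groups.
    p = torch.sigmoid(logits).squeeze(-1)
    return (p[sensitive == 1].mean() - p[sensitive == 0].mean()).abs()

def primal_dual_step(model, optimizer, x, y, sensitive, lam, tau, dual_lr):
    """One primal-dual step on the Lagrangian
       L(theta, lam) = task_loss(theta) + lam * (fairness_gap(theta) - tau)."""
    logits = model(x)
    task_loss = F.binary_cross_entropy_with_logits(logits.squeeze(-1), y.float())
    violation = fairness_gap(logits, sensitive) - tau   # constraint: gap <= tau
    lagrangian = task_loss + lam * violation

    optimizer.zero_grad()
    lagrangian.backward()
    # In the differentially private variant, per-sample gradients would be
    # clipped and Gaussian noise added here, with the Rényi filter above
    # deciding which samples may still participate.
    optimizer.step()

    # Dual ascent on the multiplier; projection onto lam >= 0.
    lam = max(0.0, lam + dual_lr * violation.item())
    return lam
```

The multiplier grows while the fairness constraint is violated and shrinks toward zero once it is satisfied, which is how a Lagrangian of this form balances the task objective against the imposed constraints during training.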