Computer Science ›› 2022, Vol. 49 ›› Issue (9): 297-305. doi: 10.11896/jsjkx.210800108

• Information Security •

Federated Learning Scheme Based on Secure Multi-party Computation and Differential Privacy

TANG Ling-tao1, WANG Di1, ZHANG Lu-fei1, LIU Sheng-yun2   

  1. State Key Laboratory of Mathematical Engineering and Advanced Computing, Wuxi, Jiangsu 214125, China
    2. School of Cyber Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
  • Received: 2021-08-12  Revised: 2022-02-27  Online: 2022-09-15  Published: 2022-09-09
  • About author: TANG Ling-tao, born in 1994, Ph.D. candidate. His main research interests include information security and privacy-preserving machine learning.
    LIU Sheng-yun, born in 1985, Ph.D., associate professor. His main research interests include blockchain, secure multi-party computation, distributed storage systems and federated learning.
  • Supported by:
    National Key Research and Development Program of China (2016YFB1000500) and National Science and Technology Major Project (2018ZX01028102).

Abstract: Federated learning provides a novel solution to collaborative learning among mutually untrusted entities. Through a local-training-and-central-aggregation pattern, the federated learning algorithm trains a global model while protecting the local data privacy of each entity. However, recent studies show that the local models uploaded by clients and the global models produced by the server may still leak users' private information. Secure multi-party computation and differential privacy are two mainstream privacy-preserving techniques, used to protect the privacy of the computation process and of the computation outputs, respectively. Few works exploit the benefits of these two techniques at the same time. This paper proposes a privacy-preserving federated learning scheme for deep learning that combines secure multi-party computation and differential privacy. Clients add noise to their local models and secret-share them among multiple servers. The servers aggregate these model shares by secure multi-party computation to obtain a private global model. The proposed scheme not only protects the privacy of the local model updates uploaded by clients, but also prevents adversaries from inferring sensitive information from globally shared data such as the aggregated model. The scheme also tolerates dropout of unstable clients and is compatible with complex aggregation functions. In addition, it extends naturally to the decentralized setting for real-world applications where no trusted center exists. We implement our system in Python and PyTorch. Experiments validate that the proposed scheme achieves the same level of efficiency and accuracy as plaintext federated learning.
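As a rough illustration of the workflow described in the abstract, the sketch below walks through one aggregation round: each client perturbs its local update with Gaussian noise, encodes it in fixed point, and additively secret-shares it among the servers; the servers then combine shares so that only the noisy aggregate is ever reconstructed. This is a minimal sketch in Python, not the paper's implementation: the field modulus, fixed-point scale, noise level, and all function names are assumptions made for the example, and a real deployment would add secure channels, a full MPC protocol, and calibrated privacy accounting.

import numpy as np

PRIME = 2**31 - 1   # modulus of the additive secret-sharing field (illustrative choice)
SCALE = 2**16       # fixed-point scaling factor for encoding real-valued weights

def encode(update):
    """Encode a real-valued update as fixed-point field elements."""
    return np.round(update * SCALE).astype(np.int64) % PRIME

def decode(agg, num_clients):
    """Map the reconstructed aggregate back to an averaged real-valued update."""
    centered = np.where(agg > PRIME // 2, agg - PRIME, agg)
    return centered.astype(np.float64) / SCALE / num_clients

def client_share(update, num_servers, sigma=0.1, rng=np.random.default_rng()):
    """Client side: perturb the local update with Gaussian noise, then split it into additive shares."""
    noisy = update + rng.normal(0.0, sigma, size=update.shape)       # differential-privacy noise
    secret = encode(noisy)
    shares = [rng.integers(0, PRIME, size=secret.shape) for _ in range(num_servers - 1)]
    shares.append((secret - sum(shares)) % PRIME)                    # shares sum to the secret mod PRIME
    return shares

def servers_aggregate(all_client_shares):
    """Server side: every server sums the shares it holds; only the per-server sums are combined."""
    num_servers = len(all_client_shares[0])
    per_server = [sum(cs[s] for cs in all_client_shares) % PRIME for s in range(num_servers)]
    return sum(per_server) % PRIME                                   # reveals only the aggregate

# Toy run: three clients, two servers, a 4-parameter "model".
updates = [np.full(4, 0.5), np.full(4, 1.0), np.full(4, 1.5)]
shares = [client_share(u, num_servers=2) for u in updates]
print(decode(servers_aggregate(shares), num_clients=len(updates)))   # ~[1.0, 1.0, 1.0, 1.0] plus noise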

Key words: Federated learning, Secure multi-party computation, Differential privacy, Privacy preserving, Deep learning

CLC Number: TP309