Computer Science ›› 2025, Vol. 52 ›› Issue (11A): 241200220-10. DOI: 10.11896/jsjkx.241200220

• Information Security •

Adversarial Attack Method for Graph Vertical Federated Learning

BAI Yang, CHEN Jinyin, ZHENG Haibin, ZHENG Yayu

  1. College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China
  • Online: 2025-11-15 Published: 2025-11-10
  • Corresponding author: CHEN Jinyin (chenjinyin@zjut.edu.cn)
  • About the author: 211124030094@zjut.edu.cn
  • Supported by:
    National Natural Science Foundation of China (62072406, 62406286), Zhejiang Provincial Natural Science Foundation (LDQ23F020001), Key R&D Projects in Zhejiang Province (2022C01018), and National Key R&D Projects of China (2018AAA0100801).

Adversarial Attack on Vertical Graph Federated Learning

BAI Yang, CHEN Jinyin, ZHENG Haibin, ZHENG Yayu   

  1. College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China
  • Online: 2025-11-15 Published: 2025-11-10
  • Supported by:
    National Natural Science Foundation of China (62072406, 62406286), Zhejiang Provincial Natural Science Foundation (LDQ23F020001), Key R&D Projects in Zhejiang Province (2022C01018) and National Key R&D Projects of China (2018AAA0100801).

Abstract: Graph vertical federated learning is a distributed machine learning approach that combines graph data with vertical federated learning, and it is widely applied in fields such as financial services, healthcare, and social networks. While preserving privacy, it exploits data diversity to significantly improve model performance. However, studies have shown that graph vertical federated learning is vulnerable to adversarial attacks. Existing adversarial attack methods against graph neural networks, such as the gradient maximization attack and the simplified gradient attack, still suffer from low attack success rates, poor stealthiness, and inapplicability under defense when implemented in the graph vertical federated framework. To address these challenges, this paper proposes an adversarial attack method for graph vertical federated learning, termed Node and Feature Adversarial Attack (NFAttack), which designs a node attack strategy and a feature attack strategy to mount efficient attacks from different dimensions. First, the node attack strategy evaluates node importance with the degree centrality metric and perturbs high-centrality nodes by connecting a certain number of fake nodes to them, forming fake edges. Second, the feature attack strategy injects hybrid noise, composed of random noise and gradient noise, into node features to disrupt classification results. Finally, experiments on six datasets and three graph neural network models show that NFAttack achieves an average attack success rate of 80%, about 30% higher than other algorithms. Moreover, NFAttack still exhibits strong attack performance under multiple federated learning defense mechanisms.
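To make the node attack strategy easier to picture, the following is a minimal sketch of degree-centrality-based fake-node injection on a NumPy adjacency matrix. The function name inject_fake_nodes, the num_fake and top_k budgets, and the choice to wire every fake node to the same set of targets are illustrative assumptions for this sketch, not details taken from the paper.

```python
import numpy as np

def inject_fake_nodes(adj, num_fake, top_k):
    """Illustrative degree-centrality-based fake-node injection (assumed interface).

    adj      : (n, n) symmetric 0/1 adjacency matrix of the attacker's local graph.
    num_fake : number of fake nodes to inject (attack budget).
    top_k    : number of high-centrality nodes each fake node is wired to.
    Returns the enlarged adjacency matrix and the indices of the targeted nodes.
    """
    n = adj.shape[0]
    # Degree centrality: node degree normalized by the maximum possible degree (n - 1).
    centrality = adj.sum(axis=1) / (n - 1)
    # Rank nodes by centrality and keep the top_k most central ones as targets.
    targets = np.argsort(centrality)[-top_k:]

    # Enlarge the adjacency matrix to make room for the injected fake nodes.
    new_adj = np.zeros((n + num_fake, n + num_fake), dtype=adj.dtype)
    new_adj[:n, :n] = adj
    for i in range(num_fake):
        fake = n + i
        # Connect the fake node to every targeted high-centrality node
        # with an undirected fake edge.
        new_adj[fake, targets] = 1
        new_adj[targets, fake] = 1
    return new_adj, targets
```

Targeting high-centrality nodes is presumably preferred because perturbations attached to them propagate to many neighbors during GNN message passing, amplifying the attack's effect.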

Key words: Vertical federated learning, Graph neural network, Graph data, Node classification, Adversarial attack

Abstract: Graph vertical federated learning (GVFL) is a distributed machine learning approach that integrates graph data with vertical federated learning, widely applied in fields such as financial services, healthcare, and social networks. This method not only preserves privacy but also leverages data diversity to significantly enhance model performance. However, studies indicate that GVFL is vulnerable to adversarial attacks. Existing adversarial attack methods targeting graph neural networks (GNN), such as the Gradient Maximization Attack and the Simplified Gradient Attack, still face challenges when applied in the GVFL framework. These challenges include low attack success rates, poor stealth, and inapplicability under defense conditions. To address these issues, this paper proposes a novel adversarial attack method for GVFL, termed Node and Feature Adversarial Attack (NFAttack). NFAttack designs node and feature attack strategies to conduct efficient attacks from multiple dimensions. The node attack strategy evaluates node importance using degree centrality metrics and disrupts high-centrality nodes by connecting a certain number of fake nodes to form adversarial edges. Meanwhile, the feature attack strategy introduces hybrid noise, composed of random noise and gradient noise, into node features, thereby affecting classification results. Experiments conducted on six datasets and three GNN models demonstrate that NFAttack achieves an average attack success rate of 80%, approximately 30% higher than other methods. Furthermore, NFAttack maintains strong attack performance even under various federated learning defense mechanisms.
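As a companion illustration of the feature attack strategy, the sketch below adds hybrid noise (a random component plus a gradient component) to the node features held by the attacking party. The surrogate model, the cross-entropy loss, the FGSM-style sign step, and the eps_grad/eps_rand budgets are assumptions made for this example only; the abstract does not specify these details.

```python
import torch
import torch.nn.functional as F

def hybrid_noise_attack(features, surrogate, labels, eps_grad=0.05, eps_rand=0.01):
    """Illustrative hybrid-noise perturbation of node features (assumed interface).

    features  : (n, d) feature matrix held by the malicious participant.
    surrogate : a differentiable model mapping features to class logits,
                standing in for the attacker's local bottom model.
    labels    : (n,) labels or pseudo-labels used to obtain loss gradients.
    eps_grad / eps_rand : budgets of the gradient-noise and random-noise terms.
    """
    x = features.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(surrogate(x), labels)
    loss.backward()

    # Gradient noise: an FGSM-style sign step that increases the classification loss.
    grad_noise = eps_grad * x.grad.sign()
    # Random noise: an unstructured Gaussian component.
    rand_noise = eps_rand * torch.randn_like(x)

    # The hybrid noise is the sum of both components, added to the clean features.
    return (x + grad_noise + rand_noise).detach()
```

In a GVFL setting, the gradient signal would typically come from the gradients the server returns to the malicious participant's bottom model; the local surrogate here merely stands in for that signal.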

Key words: Vertical federated learning, Graph neural network, Graph data, Node classification, Adversarial attack

CLC Number:

  • TP387