Computer Science ›› 2024, Vol. 51 ›› Issue (1): 355-362. doi: 10.11896/jsjkx.230600127

• Information Security •

Black-box Graph Adversarial Attacks Based on Topology and Feature Fusion

GUO Yuxing1, YAO Kaixuan1, WANG Zhiqiang1, WEN Liangliang1, LIANG Jiye1,2   

  1. 1 School of Computer and Information Technology, Shanxi University, Taiyuan 030006, China
    2 Key Laboratory of Computational Intelligence and Chinese Information Processing (Shanxi University), Taiyuan 030006, China
  • Received: 2023-06-15  Revised: 2023-09-21  Online: 2024-01-15  Published: 2024-01-12
  • Corresponding author: LIANG Jiye (ljy@sxu.edu.cn)
  • About author: GUO Yuxing (1135408932@qq.com), born in 1998, postgraduate. His main research interests include machine learning and data mining.
    LIANG Jiye, born in 1962, Ph.D, professor, Ph.D supervisor, is a member of CCF (No. 06906F). His main research interests include artificial intelligence and machine learning.
  • Supported by:
    National Natural Science Foundation of China (62272285, U21A20473).


Abstract: In the era of big data, close relationships between data are ubiquitous, and graph data analysis and mining has become an important development trend of big data technology. In recent years, graph neural networks (GNNs), a novel type of graph representation learning tool, have attracted extensive attention from academia and industry, and have already achieved great success in many real-world applications. Lately, the security and trustworthiness of artificial intelligence have become a focus of concern, but most existing work studies deep learning adversarial attacks on regular Euclidean data such as images. This paper focuses on the black-box adversarial attack problem on graph data, a typical non-Euclidean structure: when the graph neural network model's information (structure and parameters) is unknown, an imperceptible, non-random perturbation of the graph data can fool the model and degrade its performance. Node-selection-based attack strategies are an important class of black-box graph adversarial attacks, but existing methods choose attack nodes mainly from topological information (e.g., node degree) without fully considering node features. This paper therefore proposes a black-box graph adversarial attack based on topology and feature fusion for citation networks. When selecting important nodes, the proposed method fuses node feature information with topological information, so that the selected nodes are important to the graph data in both respects; the attacker then applies imperceptible perturbations to the attributes of the selected nodes, which strongly affects the graph data and thereby attacks the GNN model. Experiments on three benchmark datasets show that the proposed attack strategy significantly reduces model performance without access to model parameters and outperforms baseline methods.
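The node-selection step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the degree-based topology score, the entropy-based feature score, the min-max normalization, and the fusion weight `alpha` are all assumptions introduced here for clarity (the paper's keywords indicate information entropy and node importance are involved, but the precise fusion rule is not given in the abstract).

```python
import numpy as np

def feature_entropy(X):
    """Shannon entropy (bits) of each node's normalized feature vector."""
    P = X / np.clip(X.sum(axis=1, keepdims=True), 1e-12, None)
    return -(P * np.log2(np.clip(P, 1e-12, None))).sum(axis=1)

def select_attack_nodes(A, X, budget, alpha=0.5):
    """Pick `budget` nodes that score highly on both topology and features.

    A: (n, n) adjacency matrix; X: (n, d) non-negative node features.
    alpha: hypothetical fusion weight between the two scores.
    """
    deg = A.sum(axis=1).astype(float)   # topology score: node degree
    ent = feature_entropy(X)            # feature score: entropy

    def minmax(v):
        rng = v.max() - v.min()
        return (v - v.min()) / rng if rng > 0 else np.zeros_like(v)

    score = alpha * minmax(deg) + (1 - alpha) * minmax(ent)
    # highest combined importance first
    return np.argsort(-score, kind="stable")[:budget]
```

Under this sketch, the attacker would then perturb only the attributes of the returned nodes, keeping the perturbation small enough to remain imperceptible while still degrading the target GNN.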

Key words: Graph neural networks, Black-box adversarial attack, Information entropy, Node importance, Citation network

CLC number: TP391