Computer Science ›› 2026, Vol. 53 ›› Issue (3): 1-22. doi: 10.11896/jsjkx.250700093

• Intelligent Information Systems Based on AGI Technology •

  • Corresponding author: DING Hongfa(hfding@mail.gufe.edu.cn)
  • First author: DING Yan(dyan.1203@foxmail.com)

Survey of Backdoor Attacks and Defenses on Graph Neural Network

DING Yan1, DING Hongfa1,2, YU Muran1, JIANG Heling1   

  1 School of Information, Guizhou University of Finance and Economics, Guiyang 550025, China
    2 Guizhou Province Key Laboratory of Sovereign Blockchain, Guizhou University of Finance and Economics, Guiyang 550025, China
  • Received:2025-07-15 Revised:2025-10-28 Online:2026-03-12
  • About author:DING Yan,born in 2001,postgraduate,is a member of CCF(No.Y5913G).Her main research interests include data security,graph backdoor attacks and detection.
    DING Hongfa,born in 1988,Ph.D,associate professor,master’s supervisor,is a member of CCF(No.36866M).His main research interests include data security and privacy protection,cryptographic algorithm and protocol design.
  • Supported by:
    National Natural Science Foundation of China(62566006) and Student Research Project of Guizhou University of Finance and Economics(2025BAZYSY038).


Abstract: In artificial intelligence(AI)-driven intelligent information systems,graph neural networks(GNNs) are extensively utilized for knowledge discovery and decision support in critical domains such as social network analysis and financial risk control,owing to their superior ability to model graph-structured data.However,the heavy reliance of such systems on third-party data and models exposes GNNs to stealthy backdoor attacks.Attackers can inject backdoor triggers or tamper with models to induce predetermined erroneous outputs for inputs containing specific patterns,thereby undermining the trustworthiness and reliability of intelligent information services.To ensure the security and controllability of intelligent information systems,this paper systematically reviews research on GNN backdoor attacks and defenses from both the data and the model perspective.It first analyzes the attack vectors arising during the data collection,model training,and deployment phases,and establishes a comprehensive attack-defense framework.It then categorizes attacks into six data-oriented and two model-oriented types according to their implementation stage and the attacker's capabilities,and classifies defenses into data-oriented,model-oriented,and robust-training-oriented approaches according to their deployment stage and the defender's capabilities,with a detailed comparative examination of their core mechanisms,technical features,advantages,and limitations.Finally,it summarizes current research challenges and outlines future directions.The proposed attack-defense taxonomy facilitates a deeper understanding of the evolution of GNN backdoor threats and advances the security design of next-generation trustworthy intelligent information systems.
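The data-oriented attacks reviewed here share a common core step: a small trigger subgraph is stitched into a fraction of the training graphs, whose labels are then flipped to an attacker-chosen target class. The following is a minimal, framework-free sketch of that poisoning step; the toy graph encoding, the function names, and the triangle trigger are illustrative assumptions, not any specific attack from the survey.

```python
import random

def inject_subgraph_trigger(graph, trigger_edges, target_label):
    """Attach a fixed trigger subgraph to one graph and flip its label.

    A graph is a dict: {"n": node_count, "edges": set of (u, v) pairs,
    "label": class}. Trigger nodes are appended after the existing ones,
    so the trigger pattern is identical in every poisoned graph.
    """
    offset = graph["n"]
    n_trigger = max(max(e) for e in trigger_edges) + 1
    poisoned = {
        "n": graph["n"] + n_trigger,
        "edges": set(graph["edges"]),   # copy, do not mutate the input
        "label": target_label,          # dirty-label flip to the target class
    }
    for u, v in trigger_edges:
        poisoned["edges"].add((u + offset, v + offset))
    poisoned["edges"].add((0, offset))  # attach the trigger to the host graph
    return poisoned

def poison_dataset(dataset, trigger_edges, target_label, rate, seed=0):
    """Poison a fraction `rate` of the graphs; return new list and indices."""
    rng = random.Random(seed)
    k = max(1, int(rate * len(dataset)))
    idx = set(rng.sample(range(len(dataset)), k))
    out = [inject_subgraph_trigger(g, trigger_edges, target_label)
           if i in idx else g
           for i, g in enumerate(dataset)]
    return out, idx
```

A GNN trained on such a split tends to predict `target_label` whenever the trigger pattern appears at inference time, while keeping near-normal accuracy on clean graphs, which is what makes this class of attacks stealthy.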

Key words: Graph neural network, Backdoor attack, Backdoor defense, Backdoor trigger, Data privacy and security, Intelligent information systems
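On the defense side, a recurring data-oriented idea in the surveyed literature is to sanitize the graph before training, on the intuition that injected trigger edges often connect feature-dissimilar nodes. Below is a minimal sketch of such similarity-based edge pruning; the threshold and data layout are illustrative assumptions, not a reimplementation of any particular cited defense.

```python
import math

def cosine(x, y):
    """Cosine similarity of two equal-length feature vectors."""
    num = sum(a * b for a, b in zip(x, y))
    den = math.sqrt(sum(a * a for a in x)) * math.sqrt(sum(b * b for b in y))
    return num / den if den else 0.0

def prune_dissimilar_edges(features, edges, tau=0.1):
    """Split edges into (kept, dropped) by endpoint feature similarity.

    features: {node_id: feature vector}; edges: list of (u, v) pairs.
    Edges whose endpoints are less similar than `tau` are treated as
    suspicious trigger candidates and removed before the GNN is trained.
    """
    kept, dropped = [], []
    for u, v in edges:
        (kept if cosine(features[u], features[v]) >= tau else dropped).append((u, v))
    return kept, dropped
```

The threshold trades off robustness against utility: a larger `tau` removes more potential trigger edges but also risks deleting legitimate heterophilous edges, mirroring the accuracy-robustness trade-off of data-oriented defenses.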

CLC number: 

  • TP309.2
[1]FAN W Q,MA Y,LI Q,et al.Graph Neural Networks for Social Recommendation[C]//Proceedings of WWW 2019.New York:ACM,2019:417-426.
[2]PEI Y L,CHAKRABORTY N,SYCARA K.Nonnegative Matrix Tri-factorization with Graph Regularization for Community Detection in Social Networks[C]//Proceedings of IJCAI 2015.Menlo Park,CA:AAAI,2015:2083-2089.
[3]LUO Y K,SHI L,WU X M.Unlocking the Potential of Classic GNNs for Graph-level Tasks:Simple Architectures Meet Excellence[J].CoRR:abs/2502.09263,2025.
[4]MANSIMOV E,MAHMOOD O,KANG S,et al.MolecularGeometry Prediction Using a Deep Generative Graph Neural Network[J].Scientific Reports,2019,9(1):20381.
[5]HAMILTON W L,YING R,LESKOVEC J.Inductive Representation Learning on Large Graphs[C]//Advances in Neural Information Processing Systems.2017.
[6]XIONG J C,XIONG Z P,CHEN K X,et al.Graph Neural Networks for Automated de novo Drug Design[J].Drug Discovery Today,2021,26(6):1382-1393.
[7]ABDALLAH H,AFANDI W,KALNIS P,et al.Task-Oriented GNNs Training on Large Knowledge Graphs for Accurate and Efficient Modeling[C]//Proceedings of 2024 IEEE 40th ICDE.Piscataway,NJ:IEEE,2024:1833-1846.
[8]ZHANG W,CHEN X N,YAO Z,et al.NeuralKG:An Open-Source Library for Diverse Representation Learning of Knowledge Graphs[C]//Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval.New York:ACM,2022:3323-3328.
[9]HOGAN A,BLOMQVIST E,COCHEZ M,et al.Knowledge Graphs[J].ACM Computing Surveys,2021,54(4):1-37.
[10]XI N,ZHANG Y C,FENG P B,et al.GNNDroid:Graph-Learning Based Malware Detection for Android Apps With Native Code[J].IEEE Transactions on Dependable and Secure Computing,2025,22(2):1460-1476.
[11]GU J T,ZHU H L,HAN Z W,et al.GSEDroid:GNN-based Android Malware Detection Framework Using Lightweight Semantic Embedding[J].Computers & Security,2024,140:103807.
[12]XU J,ABAD G,PICEK S.Rethinking the Trigger-Injecting Position in Graph Backdoor Attack[C]//Proceedings of IJCNN 2023.Piscataway,NJ:IEEE,2023:1-8.
[13]GUAN Z H,DU M N,LIU N H.XGBD:Explanation-guided Graph Backdoor Detection[C]//Proceedings of ECAI 2023.Amsterdam,Netherlands:IOS Press,2023:932-939.
[14]WANG X T,YIN J,LIU C G,et al.A Survey of Backdoor Attacks and Defenses on Neural Networks[J].Chinese Journal of Computers,2024,47(8):1713-1743.
[15]LI Y D,ZHANG S G,WANG W P,et al.Promoting the Sustainability of Blockchain in Web 3.0 and the Metaverse Through Diversified Incentive Mechanism Design[J].IEEE Open Journal of the Computer Society,2023,4:134-146.
[16]DU W,LIU G S.A Survey of Backdoor Attack in Deep Learning[J].Journal of Cyber Security,2022,7(3):1-16.
[17]ZHENG M Y,LIN Z,LIU Z X,et al.Survey of Textual Backdoor Attack and Defense[J].Journal of Computer Research and Development,2024,61(1):221-242.
[18]GAO M N,CHEN W,WU L F,et al.Survey on Backdoor Attacks and Defenses for Deep Learning Research[J].Journal of Software,2025,36(7):3271-3305.
[19]CHENG P Z,WU Z R,DU W,et al.Backdoor Attacks and Countermeasures in Natural Language Processing Models:A Comprehensive Security Review[J].CoRR:abs/2309.06055,2023.
[20]YAN B C,LAN J H,YAN Z.Backdoor Attacks Against Voice Recognition Systems:A Survey[J].ACM Computing Surveys,2024,57(3):1-35.
[21]ZHANG Z X,JIA J Y,WANG B H,et al.Backdoor Attacks to Graph Neural Networks[C]//Proceedings of SACMAT 2021.New York:ACM,2021:15-26.
[22]YANG X,LI G L,ZHOU K,et al.Exploring Graph Neural Backdoors in Vehicular Networks:Fundamentals,Methodologies,Applications,and Future Perspectives[J].IEEE Open Journal of Vehicular Technology,2025,6:1051-1071.
[23]LIU X Y,CHEN J,WEN Q.A Survey on Graph Classification and Link Prediction Based on GNN[J].CoRR:abs/2307.00865,2023.
[24]ZÜGNER D,AKBARNEJAD A,GÜNNEMANN S.Adversarial Attacks on Neural Networks for Graph Data[C]//Proceedings of IJCAI 2019.San Francisco,CA:IJCAI.org,2019:6246-6250.
[25]XIA H,ZHAO X W,ZHANG R,et al.Clean-label Graph Backdoor Attack in the Node Classification Task[C]//Proceedings of AAAI 2025.Menlo Park,CA:AAAI,2025:21626-21634.
[26]KANG C Z,ZHANG H,LIU Z,et al.LR-GNN:A Graph Neural Network Based on Link Representation for Predicting Molecular Associations[J].Briefings in Bioinformatics,2022,23(1):bbab513.
[27]KHODABANDEH G,EZAZ A,BABAEI M,et al.UtilizingGraph Neural Networks for Effective Link Prediction in Microservice Architectures[C]//Proceedings of ICPE 2025.New York:ACM,2025:19-30.
[28]ZHANG Y X,LIU X,WU M,et al.Disttack:Graph Adversarial Attacks Toward Distributed GNN Training[C]//Proceedings of the 30th European Conference on Parallel and Distributed Processing.Berlin:Springer,2024:302-316.
[29]LIU J,HE Z H,MIAO Y H.Causality-based Adversarial Attacks for Robust GNN Modelling with Application in Fault Detection[J].Reliability Engineering & System Safety,2024,252:110464.
[30]DONG X W,LI J C,LI S J,et al.Adaptive Backdoor Attacks with Reasonable Constraints on Graph Neural Networks[J].IEEE Transactions on Dependable and Secure Computing,2025,22(4):4053-4069.
[31]DING Y H,LIU Y,JI Y G,et al.SPEAR:A Structure-Preserving Manipulation Method for Graph Backdoor Attacks[C]//Proceedings of WWW 2025.New York:ACM,2025:1237-1247.
[32]HE X L,WEN R,WU Y X,et al.Node-Level Membership Inference Attacks Against Graph Neural Networks[J].CoRR:abs/2102.05429,2021.
[33]WU B,YANG X W,PAN S R,et al.Adapting Membership Inference Attacks to GNN for Graph Classification:Approaches and Implications[C]//Proceedings of IEEE ICDM 2021.Piscataway,NJ:IEEE,2021:1421-1426.
[34]CONTI M,LI J X,PICEK S,et al.Label-only Membership Inference Attack Against Node-level Graph Neural Networks[C]//Proceedings of ACM AISec 2022.New York:ACM,2022:1-12.
[35]PODHAJSKI M,DUBINSKI J,BOENISCH F,et al.Efficient Model-stealing Attacks Against Inductive Graph Neural Networks[C]//Proceedings of ECAI 2024.Amsterdam,Netherlands:IOS Press,2024:1438-1445.
[36]SHEN Y,HE X L,HAN Y F,et al.Model Stealing Attacks Against Inductive Graph Neural Networks[C]//Proceedings of IEEE SP 2022.Piscataway,NJ:IEEE,2022:1175-1192.
[37]DAI E Y,LIN M H,ZHANG X,et al.Unnoticeable Backdoor Attacks on Graph Neural Networks[C]//Proceedings of ACM WWW 2023.New York:ACM,2023:2263-2273.
[38]CHEN Y,YE Z L,ZHAO H X,et al.Feature-based Graph Backdoor Attack in the Node Classification Task[J].International Journal of Intelligent Systems,2023,2023:1-13.
[39]DAI J Z,SUN H Y.Effective Backdoor Attack on Graph Neural Networks in Link Prediction Tasks[J].CoRR:abs/2401.02663,2025.
[40]WANG B H,GONG N Z Q.Attacking Graph-based Classification via Manipulating the Graph Structure[C]//Proceedings of ACM CCS 2019.New York:ACM,2019:2023-2040.
[41]HAN Y W,LAI Y N,ZHU Y L,et al.Cost Aware Untargeted Poisoning Attack Against Graph Neural Networks[C]//Proceedings of IEEE ICASSP.Piscataway,NJ:IEEE,2024:4940-4944.
[42]BOJCHEVSKI A,GÜNNEMANN S.Adversarial Attacks onNode Embeddings via Graph Poisoning[C]//Proceedings of ICML 2019.New York:PMLR,2019:695-704.
[43]XI Z H,PANG R,JI S L,et al.Graph Backdoor[C]//Procee-dings of USENIX Security 2021.Berkeley,CA:USENIX Association,2021:1523-1540.
[44]YANG S Q,DOAN B G,MONTAGUE P,et al.Transferable Graph Backdoor Attack[C]//Proceedings of RAID 2022.New York:ACM,2022:321-332.
[45]WANG K Y,DENG H X,XU Y J,et al.Multi-Target Label Backdoor Attacks on Graph Neural Networks[J].Pattern Recognition,2024,152:110449.
[46]GU T Y,LIU K,DOLAN-GAVITT B,et al.BadNets:Evaluating Backdooring Attacks on Deep Neural Networks[J].IEEE Access,2019,7:47230-47244.
[47]KURITA K,MICHEL P,NEUBIG G.Weight Poisoning Attacks on Pre-trained Models[J].CoRR:abs/2004.06660,2020.
[48]SAHA A,SUBRAMANYA A,PIRSIAVASH H.Hidden Trigger Backdoor Attacks[C]//Proceedings of the 34th AAAI Conference on Artificial Intelligence.Menlo Park,CA:AAAI,2020:11957-11965.
[49]XU J.Connecting the Dots:Exploring Backdoor Attacks on Graph Neural Networks[D].Delft:Delft University of Technology,2025.
[50]HUANG Q,YAMADA M,TIAN Y,et al.GraphLIME:Local Interpretable Model Explanations for Graph Neural Networks[J].IEEE Transactions on Knowledge and Data Engineering,2023,35(7):6968-6972.
[51]XU J,PICEK S.Poster:Clean-Label Backdoor Attack on Graph Neural Networks[C]//Proceedings of CCS 2022.New York:ACM,2022:3491-3493.
[52]YANG C F,WU Q,LI H,et al.Generative Poisoning Attack Method Against Neural Networks[J].CoRR:abs/1703.01340,2017.
[53]CHEN J Y,XIONG H Y,ZHENG H B,et al.Dyn-Backdoor:Backdoor Attack on Dynamic Link Prediction[J].IEEE Tran-sactions on Network Science and Engineering,2024,11(1):525-542.
[54]GOODFELLOW I J,SHLENS J,SZEGEDY C.Explaining and Harnessing Adversarial Examples[C]//Proceedings of the ICLR 2015.San Diego,CA:OpenReview.net,2015.
[55]MADRY A,MAKELOV A,SCHMIDT L,et al.Towards Deep Learning Models Resistant to Adversarial Attacks[C]//Proceedings of ICLR 2018.San Diego,CA:OpenReview.net,2018.
[56]SHENG Y,CHEN R,CAI G Y,et al.Backdoor Attack of Graph Neural Networks Based on Subgraph Trigger[C]//Proceedings of CollaborateCom 2021.Berlin:Springer,2021:276-296.
[57]XU J,XUE M H,PICEK S.Explainability-based Backdoor Attacks Against Graph Neural Networks[C]//Proceedings of the 3rd ACM Workshop on Wireless Security and Machine Lear-ning.New York:ACM,2021:31-36.
[58]YING Z T,BOURGEOIS D,YOU J X,et al.GNNExplainer:Generating Explanations for Graph Neural Networks[C]//Proceedings of NeurIPS 2019.New York:Curran Associates Inc.,2019:9240-9251.
[59]CHEN L Y,YAN N,ZHANG B Y,et al.A General Backdoor Attack to Graph Neural Networks Based on Explanation Method[C]//Proceedings of IEEE TrustCom 2022.Piscataway,NJ:IEEE,2022:759-768.
[60]WANG H W,LIU T H,SHENG Z Y,et al.Explanatory Subgraph Attacks Against Graph Neural Network[J].Neural Networks,2024,172:106097.
[61]TONG H B,MA H F,SHEN H,et al.Key Substructure-Driven Backdoor Attacks on Graph Neural Networks[C]//LNCS 15020:Proceedings of ICANN 2024.Berlin:Springer,2024:159-174.
[62]ZHENG H B,XIONG H Y,CHEN J Y,et al.Motif-Backdoor:Rethinking the Backdoor Attack on Graph Neural Networks via Motifs[J].IEEE Transactions on Computational Social Systems,2024,11(2):2479-2493.
[63]ALRAHIS L,PATNAIK S,HANIF M A,et al.PoisonedGNN:Backdoor Attack on Graph Neural Networks-Based Hardware Security Systems[J].IEEE Transactions on Computers,2023,72(10):2822-2834.
[64]ALRAHIS L,SINANOGLU O.Graph Neural Networks for Hardware Vulnerability Analysis-Can You Trust Your GNN?[C]//Proceedings of IEEE VTS 2023.Piscataway,NJ:IEEE,2023:1-4.
[65]LI L Y,SONG D M,LI X N,et al.Backdoor Attacks on Pre-trained Models by Layerwise Weight Poisoning[C]//Proceedings of EMNLP 2021.ACL,2021:3023-3032.
[66]DUMFORD J,SCHEIRER W J.Backdooring Convolutional Neural Networks via Targeted Weight Perturbations[C]//Proceedings of IEEE IJCB 2020.Piscataway,NJ:IEEE,2020:1-9.
[67]HONG S,CARLINI N,KURAKIN A.Handcrafted Backdoors in Deep Neural Networks[C]//Proceedings of NeurIPS 2022.New York:Curran Associates Inc.,2022:8068-8080.
[68]XU J,WANG R,KOFFAS S,et al.More is Better(Mostly):On the Backdoor Attacks in Federated Graph Neural Networks[C]//Proceedings of ACSAC 2022.New York:ACM,2022:684-698.
[69]ZHUANG H M,YU M X,WANG H,et al.Backdoor Federated Learning by Poisoning Backdoor-critical Layers[C]//Proceedings of ICLR 2024.San Diego,CA:OpenReview.net,2024.
[70]BAGDASARYAN E,VEIT A,HUA Y Q,et al.How to Backdoor Federated Learning[C]//Proceedings of AISTATS 2020.PMLR,2020:2938-2948.
[71]LIU Y Q,MA S Q,AAFER Y,et al.Trojaning Attack on Neural Networks[C]//Proceedings of NDSS 2018.Reston,Virginia:The Internet Society,2018.
[72]SALEM A,BACKES M,ZHANG Y.Don’t Trigger Me!A Triggerless Backdoor Attack Against Deep Neural Networks[J].CoRR:abs/2010.03282,2020.
[73]ZOU M H,SHI Y,WANG C L,et al.PoTrojan:Powerful Neural-Level Trojan Designs in Deep Learning Models[J].CoRR:abs/1802.03043,2018.
[74]JIANG B C,LI Z.Defending Against Backdoor Attack on Graph Neural Network by Explainability[J].arXiv:2209.02902,2022.
[75]YANG X,ZHOU K,LAI Y N,et al.Defense-as-a-Service:Black-box Shielding Against Backdoored Graph Models[J].CoRR:abs/2410.04916,2024.
[76]YUAN H,TANG J L,HU X,et al.XGNN:Towards Model-Level Explanations of Graph Neural Networks[C]//Proceedings of KDD 2020.New York:ACM,2020:430-438.
[77]LUO D S,CHENG W,XU D K,et al.Parameterized Explainer for Graph Neural Network[C]//Proceedings of NeurIPS 2020.New York:Curran Associates Inc.,2020.
[78]POPE P E,KOLOURI S,ROSTAMI M,et al.Explainability Methods for Graph Convolutional Neural Networks[C]//Proceedings of CVPR 2019.Piscataway,NJ:IEEE,2019:10772-10781.
[79]DOWNER J,WANG R,WANG B H.Securing GNNs:Explanation-Based Identification of Backdoored Training Graphs[J].CoRR:abs/2403.18136,2024.
[80]CHEN J Y,XIONG H Y,MA H N,et al.CLB-Defense:Based on Contrastive Learning Defense for Graph Neural Network Against Backdoor Attack[J].Journal on Communications,2023,44(4):154-166.
[81]YUAN D Q,XU X H,YU L,et al.E-SAGE:Explainability-based Defense Against Backdoor Attacks on Graph Neural Networks[C]//Proceedings of WASA 2024.Berlin:Springer,2024:402-414.
[82]SUNDARARAJAN M,TALY A,YAN Q Q.Gradients of Counterfactuals[J].CoRR:abs/1611.02639,2016.
[83]SUNDARARAJAN M,TALY A,YAN Q Q.Axiomatic Attribution for Deep Networks[C]//Proceedings of ICML 2017.New York:PMLR,2017:3319-3328.
[84]XING X G,XU M,BAI Y J,et al.A Graph Backdoor Detection Method for Data Collection Scenarios[J].Cybersecurity,2025,8(1):1-12.
[85]LIN X,LI M J,WANG Y S.MADE:Graph Backdoor Defense with Masked Unlearning[J].CoRR:abs/2411.18648,2024.
[86]LI Y G,LYU X X,KOREN N,et al.Anti-Backdoor Learning:Training Clean Models on Poisoned Data[C]//Proceedings of NeurIPS 2021.New York:Curran Associates Inc.,2021:14900-14912.
[87]TRAN B,LI J,MADRY A.Spectral Signatures in Backdoor Attacks[C]//Proceedings of NeurIPS 2018.New York:Curran Associates Inc.,2018:8011-8021.
[88]WU S X,HE Q Y,ZHANG Y,et al.Debiasing Backdoor Attack:A Benign Application of Backdoor Attack in Eliminating Data Bias[J].Information Sciences,2023,643:119171.
[89]SUI H,CHEN B,ZHANG J L,et al.DMGNN:Detecting and Mitigating Backdoor Attacks in Graph Neural Networks[J].CoRR:abs/2410.14105,2024.
[90]LIU C,HUANG H,XING Y J,et al.Boosting Graph Robustness Against Backdoor Attacks:An Over-Similarity Perspective[J].CoRR:abs/2502.01272,2025.
[91]ESTER M,KRIEGEL H P,SANDER J,et al.A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise[C]//Proceedings of KDD 1996.Menlo Park,CA:AAAI,1996:226-231.
[92]ZHU Y X,MANDULAK M,WU K,et al.On the Robustness of Graph Reduction Against GNN Backdoor[C]//Proceedings of AISec 2024.New York:ACM,2024:65-76.
[93]ZHANG H L,BAI Y J,CHEN Y J,et al.BARBIE:Robust Backdoor Detection Based on Latent Separability[C]//Proceedings of NDSS 2025.Reston,Virginia:The Internet Society,2025.
[94]ZHANG J L,ZHU C C,RAO B S,et al.“No Matter What You Do”:Purifying GNN Models via Backdoor Unlearning[J].arXiv:2410.01272,2024.
[95]SELVARAJU R R,COGSWELL M,DAS A,et al.Grad-CAM:Visual Explanations from Deep Networks via Gradient-Based Localization[J].International Journal of Computer Vision,2020,128(2):336-359.
[96]ZHANG J L,RAO B S,ZHU C C,et al.Fine-Tuning is Not Fine:Mitigating Backdoor Attacks in GNNs with Limited Clean Data[J].arXiv:2501.05835,2025.
[97]ZHANG Z W,LIN M H,XU J J,et al.Robustness InspiredGraph Backdoor Defense[C]//Proceedings of ICLR 2025.San Diego,CA:OpenReview.net,2025.
[98]WAN G C,SHI Z T,HUANG W K,et al.Energy-Based Backdoor Defense Against Federated Graph Learning[C]//Procee-dings of ICLR 2025.San Diego,CA:OpenReview.net,2025.
[99]WANG X,ZHANG Z Y,XIAO L X,et al.Towards Multi-Modal Graph Large Language Model[J].CoRR:abs/2506.09738,2025.
[100]GAO H L,LI X,ZHAO L,et al.HeteroBA:A Structure-Manipulating Backdoor Attack on Heterogeneous Graphs[J].CoRR:abs/2505.21140,2025.
[101]CHEN J Y,CAO Z Q,ZHENG H B,et al.Security Review of Model Quantification Methods[J].Journal of Chinese Computer Systems,2025,46(6):1473-1490.