Computer Science ›› 2026, Vol. 53 ›› Issue (3): 64-77. doi: 10.11896/jsjkx.250700094

• Intelligent Information Systems Based on AGI Technology •

Node-influence Based Construction Algorithm of Approximate Worst-case Forgetting Set for Graph Unlearning

ZHAO Zhengbiao1, LU Hanyu1,2, DING Hongfa3,4   

  1 College of Big Data and Information Engineering, Guizhou University, Guiyang 550025, China
    2 Big Data Application Engineering Research Center, Guizhou University of Engineering Science, Bijie, Guizhou 551700, China
    3 School of Information, Guizhou University of Finance and Economics, Guiyang 550025, China
    4 Guizhou Province Key Laboratory of Sovereign Blockchain (Guizhou University of Finance and Economics), Guiyang 550025, China
  • Received: 2025-07-15  Revised: 2025-10-29  Published online: 2026-03-12
  • Corresponding author: DING Hongfa (hfding@mail.gufe.edu.cn)
  • About author: (gs.zbzhao23@gzu.edu.cn)
  • Supported by:
    National Natural Science Foundation of China (62566006); Science and Technology Program of Guizhou Province (Qiankehe Achievement [2024] Major 017, Qiankehe Platform ZSYS[2024]003, Qiankehe Basic ZK[2022] General 018); Natural Science Research Program of the Department of Education of Guizhou Province (Qianjiaoji [2024] No.326, Qianjiaoji [2023] No.065, Qianjiaoji [2023] No.014); "Golden Course" of Ordinary Undergraduate Universities in Guizhou Province (2024JKXN0078); Artificial Intelligence Application Innovation Talent Team Project of Bijie City ((2023)09)

Node-influence Based Construction Algorithm of Approximate Worst-case Forgetting Set for Graph Unlearning

ZHAO Zhengbiao1, LU Hanyu1,2, DING Hongfa3,4   

  1 College of Big Data and Information Engineering, Guizhou University, Guiyang 550025, China
    2 School of Information Engineering, Guizhou University of Engineering Science, Bijie, Guizhou 551700, China
    3 School of Information, Guizhou University of Finance and Economics, Guiyang 550025, China
    4 Guizhou Province Key Laboratory of Sovereign Blockchain, Guizhou University of Finance and Economics, Guiyang 550025, China
  • Received: 2025-07-15  Revised: 2025-10-29  Online: 2026-03-12
  • About author: ZHAO Zhengbiao, born in 1998, master, is a member of CCF (No. Y6301G). His main research interests include data security and privacy protection.
    DING Hongfa, born in 1988, Ph.D., associate professor, is a member of CCF (No. 36866M). His main research interests include data security and privacy protection, cryptographic algorithm and protocol design.
  • Supported by:
    National Natural Science Foundation of China(62566006),Science and Technology Program of Guizhou(Qian Sci. Contr. Achiev. [2024] Major 017,Qian Sci. Contr. Platform ZSYS[2024]003,Qian Sci. Contr. Basic-ZK[2022] General 018),Natural Science Research Program of the Department of Education of Guizhou(Qian Edu. & Tech. [2024] 326,Qian Edu. & Tech. [2023] 065,Qian Edu. & Tech. [2023] 014),“Golden Course” of Ordinary Undergraduate Universities in Guizhou Province(2024JKXN0078) and Artificial Intelligence Application Innovation Talent Team Project of Bijie City((2023)09).

Abstract: Graph Neural Networks (GNNs) have attracted considerable attention for their wide application in fields such as social networks and recommendation systems. In recent years, demands for data forgetting arising from the right to be forgotten, data-ownership protection, and the expiration of data-use rights have kept growing, making unlearning research, including graph unlearning, deep unlearning, and unlearning for large models, a hot topic in artificial intelligence. However, most existing studies adopt a random-forgetting setting, which neither maximally safeguards data owners' right to have their data forgotten nor constructs more extreme scenarios for an in-depth and comprehensive evaluation of different unlearning algorithms. To this end, for graph unlearning, an approximate worst-case forgetting set construction algorithm based on the influence of nodes in graph data is proposed, which constructs, in an approximately optimal way, the set of node samples to be forgotten. The algorithm ranks the node influence of graph training samples by combining each node's training loss with its structural centrality, identifies the most influential and hardest-to-forget node set, and selects the forgetting node set by jointly considering model-utility impact and node importance. Experiments with different GNN models, graph datasets, and multiple graph unlearning algorithms show that the proposed algorithm enables graph unlearning algorithms to reduce model utility more effectively, with the utility drop reaching 15% compared with the random forgetting strategy; meanwhile, it significantly accentuates the differences among graph unlearning algorithms across multiple metrics, enabling more effective multi-dimensional evaluation of unlearning algorithms.

Key words: Graph neural network, Machine unlearning, Privacy protection, Worst-case forget set, Node influence

Abstract: GNNs have attracted significant attention due to their wide-ranging applications in fields such as social networks and recommendation systems. Motivated by intensified demands for data removal, including the right to be forgotten, data-ownership safeguards, and the expiration of data-use permissions, research on unlearning has accelerated, particularly in graph unlearning, deep unlearning, and unlearning in large language models. However, most existing studies simulate forgetting by randomly removing data, which fails to provide the strongest guarantee for data owners' right to be forgotten and neglects the construction of the extreme scenarios required to comprehensively evaluate different unlearning algorithms. To address these issues, this paper proposes a worst-case forgetting set construction algorithm based on node influence in graph data, aiming to approximately construct an optimal set of nodes to be forgotten for graph unlearning. The algorithm ranks the influence of training-sample nodes by combining each node's training loss and structural centrality, thereby identifying the most influential and hardest-to-forget nodes. It then selects the forgetting set by jointly considering each node's impact on model performance and its importance. Experiments on different GNN models, graph datasets, and unlearning algorithms show that the proposed algorithm enables graph unlearning methods to reduce model performance more effectively, achieving up to a 15% greater reduction compared with random forgetting strategies. At the same time, the approach significantly accentuates performance differences among graph unlearning algorithms across multiple metrics, facilitating more effective multi-dimensional evaluation of these algorithms.
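
The selection idea described in the abstract, ranking training nodes by a combined score of training loss and structural centrality and keeping the top-k as the forget set, can be illustrated with a minimal Python sketch. The snippet below is only an illustration under stated assumptions (PageRank as the centrality measure, a linear weighting parameter alpha, and the helper name worst_case_forget_set are choices made here, not the paper's implementation):

    # Minimal sketch (not the paper's reference implementation): score each training
    # node by combining its per-node training loss with a structural-centrality
    # measure, then take the k highest-scoring nodes as an approximate worst-case
    # forget set. PageRank and the weighting `alpha` are illustrative assumptions.
    import networkx as nx
    import numpy as np

    def worst_case_forget_set(graph: nx.Graph, train_losses: dict, k: int, alpha: float = 0.5):
        """Return the k training nodes with the highest combined influence score.

        graph        -- the data graph
        train_losses -- per-node training loss of the trained GNN, {node: loss}
        k            -- size of the forget set
        alpha        -- loss/centrality trade-off (assumed hyperparameter)
        """
        centrality = nx.pagerank(graph)  # structural centrality (PageRank as a stand-in)

        def normalize(values: dict) -> dict:
            arr = np.array(list(values.values()), dtype=float)
            lo, hi = arr.min(), arr.max()
            span = (hi - lo) or 1.0  # avoid division by zero when all scores are equal
            return {n: (v - lo) / span for n, v in values.items()}

        loss_n = normalize(train_losses)
        cent_n = normalize({n: centrality[n] for n in train_losses})

        # Combined influence: high-loss (hard to forget) and structurally central nodes rank first.
        score = {n: alpha * loss_n[n] + (1 - alpha) * cent_n[n] for n in train_losses}
        return sorted(score, key=score.get, reverse=True)[:k]

    if __name__ == "__main__":
        g = nx.karate_club_graph()
        rng = np.random.default_rng(0)
        losses = {n: float(rng.random()) for n in g.nodes}  # placeholder per-node losses
        print(worst_case_forget_set(g, losses, k=5))

In the paper's setting the per-node losses would come from the trained GNN on the original graph; random placeholders are used above only so the sketch runs end to end.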

Key words: Graph neural network, Machine unlearning, Privacy protection, Worst-case forget set, Node influence

CLC Number: TP309.2
[1]SUN H N,LI X K,WU Z G,et al.Breaking the entanglement of homophily and heterophily in semi-supervised node classification[C]//Proceedings of the 40th IEEE International Conference on Data Engineering(ICDE 2024).IEEE,2024:2379-2392.
[2]DEFFERRARD M,BRESSON X,VANDERGHEYNST P.Convolutional neural networks on graphs with fast localized spectral filtering[C]//Proceedings of the 30th Conference on Neural Information Processing Systems(NIPS 2016).Red Hook,NY:Curran Associates Inc.,2016:3844-3852.
[3]LUO Y K,SHI L,WU X M,et al.Classic GNNs are strong baselines:reassessing GNNs for node classification[C]//Proceedings of the 38th Conference on Neural Information Processing Systems(NeurIPS 2024).Red Hook,NY:Curran Associates Inc.,2024:97650-97669.
[4]ZHU C Z,WANG X,ZHENG Q.A graph convolutional network framework based on element separation and global attention[J].Journal of Computer Research and Development,2024,61(8):2008-2019.
[5]ZOU H Q,SHI B Z,SONG L Y,et al.A survey on complex spatio-temporal data mining methods based on graph neural networks[J].Journal of Software,2025,36(4):1811-1843.
[6]XIAO G Q,LI X Q,CHEN Y D,et al.A survey of large-scale graph neural network research[J].Chinese Journal of Computers,2024,47(1):148-171.
[7]LIN J J,YE Z L,ZHAO H X,et al.A survey on hypergraph neural networks[J].Journal of Computer Research and Development,2024,61(2):362-384.
[8]ZHANG H F,ZHANG F,WANG H,et al.A novel privacy-preserving graph convolutional network via secure matrix multiplication[J].Information Sciences,2023,657:119897.
[9]WANG Z X,CHEN T S,REN J,et al.Deep reasoning with knowledge graph for social relationship understanding[C]//Proceedings of the 27th International Joint Conference on Artificial Intelligence(IJCAI 2018).International Joint Conferences on Artificial Intelligence Organization,2018:1021-1028.
[10]CAI X H,HUANG C,XIA L H,et al.LightGCL:Simple yet effective graph contrastive learning for recommendation[C]//Proceedings of the 11th International Conference on Learning Representations(ICLR 2023).New Orleans,LA:OpenReview.net,2023:1-15.
[11]HE X N,DENG K,WANG X,et al.LightGCN:simplifying and powering graph convolution network for recommendation[C]//Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval(SIGIR 2020).New York:ACM,2020:639-648.
[12]JIN W.Graph mining with graph neural networks[C]//Proceedings of the 14th ACM International Conference on Web Search and Data Mining(WSDM 2021).New York:ACM,2021:1119-1120.
[13]LIU Z Y,JIANG Y,SHEN J Y,et al.A survey on federated unlearning:challenges,methods,and future directions[J].ACM Computing Surveys,2025,57(1):2.
[14]WANG C L,HUAI M,WANG D.Inductive graph unlearning[C]//Proceedings of the 32nd USENIX Security Symposium(USENIX Security 2023).Boston,MA:USENIX Association,2023:3205-3222.
[15]CHENG J L,DASOULAS G,HE H,et al.GNNDelete:A general unlearning strategy for graph neural networks[C]//Proceedings of the 11th International Conference on Learning Representations(ICLR 2023).New Orleans,LA:OpenReview.net,2023:1-23.
[16]LI X K,ZHAO Y L,WU Z Y,et al.Towards effective and general graph unlearning via mutual evolution[C]//Proceedings of the 38th AAAI Conference on Artificial Intelligence(AAAI 2024).Association for the Advancement of Artificial Intelligence,2024:13682-13690.
[17]WU J C,YANG Y,QIAN Y C,et al.GIF:A general graph unlearning strategy via influence function[C]//Proceedings of the ACM Web Conference 2023(WWW 2023).ACM,2023:651-661.
[18]FAN C C,LIU J Q,HERO A O,et al.Challenging forgets:Unveiling the worst-case forget sets in machine unlearning[C]//Proceedings of the 18th European Conference on Computer Vision(ECCV 2024).Berlin:Springer,2024:278-297.
[19]WEI J Q,JIANG D,ZHANG C.On efficient single-source personalized PageRank computation in Online Social Networks[J].IEEE Transactions on Knowledge and Data Engineering,2025,37(6):3598-3612.
[20]MCCALLUM A,NIGAM K,RENNIE J,et al.Automating the construction of internet portals with machine learning[J].Information Retrieval,2000,3(2):127-163.
[21]SHCHUR O,MUMME M,BOJCHEVSKI A,et al.Pitfalls of graph neural network evaluation[J].CoRR:abs/1811.05868,2018.
[22]HU W,FEY M,ZITNIK M,et al.Open Graph Benchmark:Datasets for Machine Learning on Graphs[C]//Proceedings of the 34th Conference on Neural Information Processing Systems(NeurIPS 2020).Red Hook,NY:Curran Associates Inc.,2020:22118-22133.
[23]CHEN M,ZHANG Z K,WANG T H,et al.Graph unlearning[C]//Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security(CCS 2022).New York:ACM,2022:499-513.
[24]LI X,FAN B,WU Z,et al.Toward scalable graph unlearning:A node influence maximization based approach[J].CoRR:abs/2501.11823,2025.
[25]DONG Y S,ZHANG B C,LEI Z Y,et al.IDEA:A flexible framework of certified unlearning for graph neural networks[C]//Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining(KDD 2024).New York:ACM,2024:621-630.
[26]TAN J J,SUN F,QIU R C,et al.Unlink to unlearn:Simplifying edge unlearning in GNNs[C]//Proceedings of the ACM Web Conference 2024(WWW 2024).New York:ACM,2024:489-492.
[27]SINHA Y,MANDAL M,KANKANHALLI M S.Distill to delete:Unlearning in graph networks with knowledge distillation[J].CoRR:abs/2309.16173,2023.
[28]WU K,SHEN J,NING Y,et al.Certified edge unlearning for graph neural networks[C]//Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining(KDD 2023).ACM,2023:2606-2617.
[29]ZHANG T,GAO Y H,ZHAO J,et al.Efficient exact and approximate betweenness centrality computation for temporal graphs[C]//Proceedings of the ACM Web Conference 2024(WWW 2024).New York:ACM,2024:2395-2406.
[30]IACOBUCCI D,MCBRIDE R,POPOVICH D L,et al.Confidence intervals for assessing sizes of social network centralities[J].Social Networking,2018,7(4):220-242.
[31]TANG J K,MUSOLESI M,MASCOLO C,et al.Analysing information flows and key mediators through temporal centrality metrics[C]//Proceedings of the 3rd Workshop on Social Network Systems(SNS 2010).New York:ACM,2010:3.
[32]LI A,CORNELIUS S P,LIU Y Y,et al.The fundamental advantages of temporal networks[J].Science,2017,358(6366):1042-1046.
[33]ROZENSHTEIN P,GIONIS A.Temporal pagerank[C]//Machine Learning and Knowledge Discovery in Databases.Cham:Springer,2016:674-689.
[34]YU E,FU Y,ZHOU J,et al.Predicting critical nodes in temporal networks by dynamic graph convolutional networks[J].Applied Sciences,2023,13(12):7272.
[35]YING Z,BOURGEOIS D,YOU J,et al.GNNExplainer:Generating explanations for graph neural networks[C]//Proceedings of the 33rd Conference on Neural Information Processing Systems(NeurIPS 2019).Curran Associates Inc.,2019:9240-9251.
[36]DUVAL A,MALLIAROS F D.GraphSVX:Shapley value explanations for graph neural networks[C]//Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases(ECML PKDD 2021).Cham:Springer,2021:302-318.
[37]BALDASSARRE F,AZIZPOUR H.Explainability techniques for graph convolutional networks[C]//Proceedings of the International Conference on Machine Learning,Workshop on Learning and Reasoning with Graph-Structured Representations(ICML 2019).New York:ACM,2019.
[38]YUAN H,YU H Y,WANG J,et al.On explainability of graph neural networks via subgraph explorations[C]//Proceedings of the 38th International Conference on Machine Learning(ICML 2021).New York:PMLR,2021:12241-12252.
[39]AKKAS S,AZAD A.GNNShap:scalable and accurate GNN explanation using Shapley values[C]//Proceedings of the ACM Web Conference 2024(WWW 2024).New York:ACM,2024:827-838.
[40]LI X,SHEN Q L,WANG H N,et al.LoReUn:Data itself implicitly provides cues to improve machine unlearning[C]//Proceedings of the 38th Conference on Neural Information Processing Systems(NeurIPS 2025).NeurIPS,2025:1.
[41]ZHAO K,KURMANJI M,BARBULESCU G O,et al.What makes unlearning hard and what to do about it[C]//Proceedings of the 38th Conference on Neural Information Processing Systems(NeurIPS 2024).Red Hook,NY:Curran Associates Inc.,2024:12293-12333.
[42]XU K,HU W,LESKOVEC J,et al.How powerful are graph neural networks?[C]//Proceedings of the 7th International Conference on Learning Representations(ICLR 2019).New Orleans,LA:OpenReview.net,2019:9104-9121.
[43]TONEVA M,SORDONI A,COMBES R T D,et al.An empirical study of example forgetting during deep neural network learning[C]//Proceedings of the 7th International Conference on Learning Representations(ICLR 2019).New Orleans,LA:OpenReview.net,2019:1768-1786.
[44]KOH P W,LIANG P.Understanding black-box predictions via influence functions[C]//Proceedings of the 34th International Conference on Machine Learning.JMLR.org,2017:1885-1894.
[45]MARIANI M S,MEDO M,ZHANG Y C.Ranking nodes in growing networks:When PageRank fails[J].Scientific Reports,2015,5(1):16181.
[46]KLICPERA J,BOJCHEVSKI A,GÜNNEMANN S.Predict then propagate:Graph neural networks meet personalized PageRank[C]//Proceedings of the 7th International Conference on Learning Representations(ICLR 2019).New Orleans,LA:OpenReview.net,2019:2667-2682.
[47]XU H,ZHU T Q,ZHANG L F,et al.Machine unlearning:A survey[J].ACM Computing Surveys,2024,56(1):1-36.