Computer Science ›› 2024, Vol. 51 ›› Issue (10): 234-246. doi: 10.11896/jsjkx.230700122

• Database & Big Data & Data Science •

NLGAE: A Graph Autoencoder Model Based on Improved Network Structure and Loss Function for Node Classification Task

LIAO Bin¹, ZHANG Tao², YU Jiong³, LI Min³

  1 College of Big Data Statistics,Guizhou University of Finance and Economics,Guiyang 550025,China
    2 College of Information Engineering,Guizhou University of Traditional Chinese Medicine,Guiyang 550025,China
    3 School of Information Science and Engineering,Xinjiang University,Urumqi 830008,China
  • Received:2023-07-18 Revised:2024-01-24 Online:2024-10-15 Published:2024-10-11
  • About author:LIAO Bin,born in 1986,Ph.D,associate professor,Ph.D supervisor,is a member of CCF(No.20598M).His main research interests include deep learning,data mining and big data computing models.
    ZHANG Tao,born in 1988,Ph.D.Her main research interests include machine learning,data mining and complex network analysis.
  • Supported by:
    National Natural Science Foundation of China(61562078) and Xinjiang Tianshan Youth Plan Project(2018Q073).

Abstract: Because graph data live in a non-Euclidean space, working directly with the adjacency matrix incurs high computational and space complexity; the widely accepted remedy is to use graph embedding methods that map high-dimensional heterogeneous information, such as graph topology and node attributes, into a dense vector space. Starting from an analysis of the shortcomings of the classical graph auto-encoder models GAE (graph auto-encoder) and VGAE (variational graph auto-encoder), this paper improves graph-auto-encoder-based embedding in three aspects, namely the encoder, the decoder and the loss function, and proposes NLGAE, a graph auto-encoder model with an improved network structure and loss function. First, in the network structure, the stacked graph convolutional layers of the encoder are inverted to form the decoder, which remedies the inflexibility and limited expressiveness of the parameter-free inner-product decoder used in GAE and VGAE; in addition, the graph attention network GAT is introduced so that an attention mechanism replaces the fixed weight coefficients between neighboring nodes. Second, the redesigned loss function takes both the graph structure and the node feature information into account. Comparative experiments show that, as an unsupervised model, NLGAE learns high-quality node embeddings: in node classification tasks it outperforms not only traditional unsupervised models such as DeepWalk, GAE, GraphMAE and GATE, but also supervised graph neural network models such as GAT and GCN when an appropriate downstream classifier is chosen.
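To make the described structure concrete, the following is a minimal sketch of an NLGAE-style model, not the authors' released code: it assumes PyTorch and PyTorch Geometric's GATConv, mirrors the encoder stack as a parametric decoder that reconstructs node features, and combines a feature-reconstruction term with a structure-reconstruction term. The layer widths, number of attention heads, and weighting factor alpha are illustrative assumptions rather than values from the paper.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GATConv


class NLGAESketch(torch.nn.Module):
    """Illustrative GAT-based graph auto-encoder (a sketch, not the paper's implementation)."""

    def __init__(self, in_dim, hid_dim, emb_dim, heads=4):
        super().__init__()
        # Encoder: stacked graph attention layers, so neighbor weights are learned
        # by attention instead of being fixed normalization coefficients.
        self.enc1 = GATConv(in_dim, hid_dim, heads=heads, concat=False)
        self.enc2 = GATConv(hid_dim, emb_dim, heads=heads, concat=False)
        # Decoder: the encoder stack "inverted" into a parametric decoder that maps
        # embeddings back to node features, rather than the parameter-free
        # inner-product decoder of GAE/VGAE.
        self.dec1 = GATConv(emb_dim, hid_dim, heads=heads, concat=False)
        self.dec2 = GATConv(hid_dim, in_dim, heads=heads, concat=False)

    def encode(self, x, edge_index):
        h = F.elu(self.enc1(x, edge_index))
        return self.enc2(h, edge_index)

    def decode(self, z, edge_index):
        h = F.elu(self.dec1(z, edge_index))
        return self.dec2(h, edge_index)

    def loss(self, x, edge_index, adj_label, alpha=0.5):
        # Illustrative combined objective: reconstruct node features (MSE) and
        # graph structure (binary cross-entropy over inner-product scores), so that
        # both topology and attributes shape the embedding. `alpha` is a
        # hypothetical weighting, not a value taken from the paper.
        # adj_label: dense 0/1 adjacency matrix (float tensor of shape [N, N]).
        z = self.encode(x, edge_index)
        x_rec = self.decode(z, edge_index)
        feat_loss = F.mse_loss(x_rec, x)
        adj_logits = z @ z.t()  # pairwise structure scores
        struct_loss = F.binary_cross_entropy_with_logits(adj_logits, adj_label)
        return alpha * feat_loss + (1.0 - alpha) * struct_loss
```

Under this reading, the learned embeddings z would then be fed to a separate downstream classifier (for example, logistic regression or an SVM) for the node classification experiments the abstract refers to.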

Key words: Graph representation learning, Graph auto-encoder, Attention mechanism, Node classification

CLC Number: TP391
[1]YU Y X,LIU M,ZHANG H Y.Research on User Behavior Understanding and Personalized Service Recommendation Algorithm in Twitter Social Networks[J].Journal of Computer Research and Development,2020,57(7):1369-1380.
[2]ZHANG Y J,DONG Z,MENG X W.Research on Personalized Advertising Recommendation System and Their Applications[J].Chinese Journal of Computers,2021,44(3):531-563.
[3]PIETRO B,MONICA B,FRANCO S.Molecular generative graph neural networks for drug discovery[J].Neurocomputing,2021,450(8):242-252.
[4]JIANG W W.Graph-based deep learning for communication networks:a survey[J].Computer Communications,2022,185(3):40-54.
[5]SEYYED M S,ALI M,MOHAMMAD G.Privacy protection scheme for mobile social network[J].Journal of King Saud University-Computer and Information Sciences,2022,34(7):4062-4074.
[6]SONG D Q,WANG W P,FAN Y,et al.Quantifying the structural and temporal characteristics of negative links in signed citation networks[J].Information Processing & Management,2022,59(4):1-16.
[7]KAJIKAWA Y,MEJIA C,WU M,et al.Academic landscape of technological forecasting and social change through citation network and topic analyses[J].Technological Forecasting and Social Change,2022,182(9):1-15.
[8]GAO H,LI W,CAI H.Distributed control of a flywheel energy storage system subject to unreliable communication network[J].Energy Reports,2022,8(9):11729-11739.
[9]AUSTIN K,JORGE L,EMERSON M.A recursive logit model with choice aversion and its application to transportation networks[J].Transportation Research Part B:Methodological,2022,155(1):47-71.
[10]ZHANG T,YU J,LIAO B,et al.The Construction and Analysis of Pass Network Graph Based on GraphX[J].Journal of Computer Research and Development,2016,53(12):2729-2752.
[11]SIDDHANT D,SUNDEEP P C.A computational approach to drug repurposing using graph neural networks[J].Computers in Biology and Medicine,2022,150(9):1-14.
[12]SOVAN S,ANUP K H,SOUMYENDU S B,et al.Computational modeling of human-nCoV protein-protein interaction network[J].Methods,2022,203(7):488-497.
[13]APURVA B,SÉBASTIEN D L,THOMAS S.Construction and contextualization approaches for protein-protein interaction networks[J].Computational and Structural Biotechnology Journal,2022,20(6):3280-3290.
[14]FEDERICO C,IACOPO P,ANTONIO P,ILARIA A,et al.Gene network analysis defines a subgroup of small cell lung cancer patients with short survival[J].Clinical Lung Cancer,2022,23(6):510-521.
[15]ZHAO P,ZHANG S,LIU J,et al.Zero-shot learning via the fusion of generation and embedding for image recognition[J].Information Sciences,2021,578(11):831-847.
[16]LIU X,WANG L,HAN X.Transformer with peak suppression and knowledge guidance for fine-grained image recognition[J].Neurocomputing,2022,492(10):137-149.
[17]WASEEM U,AMIN U,TANVEER H,et al.Artificial Intelligence of Things-assisted two-stream neural network for anomaly detection in surveillance Big Video Data[J].Future Generation Computer Systems,2022,129(4):286-297.
[18]KEVAL D,YASIN Y.Online anomaly detection in surveillance videos with asymptotic bound on false alarm rate[J].Pattern Recognition,2021,114(6):1-28.
[19]SAHRAEIAN R,VAN C D.Cross-entropy training of DNN ensemble acoustic models for low-resource ASR[J].IEEE/ACM Transactions on Audio,Speech,and Language Processing,2018,26(11):1991-2001.
[20]YI J,TAO J,WEN Z,et al.Language-adversarial transfer learning for low-resource speech recognition[J].IEEE/ACM Transactions on Audio,Speech,and Language Processing,2019,27(3):621-630.
[21]ZHAO Y,KOMACHI M,KAJIWARA T,et al.Region-attentive multimodal neural machine translation[J].Neurocomputing,2022,476(3):1-13.
[22]KUMAR A,PRATAP A,SINGH A K,et al.Addressing domain shift in neural machine translation via reinforcement learning[J].Expert Systems with Applications,2022,201(9):1-18.
[23]GU J,WANG Z,KUEN J,et al.Recent advances in convolutional neural networks[J].Pattern Recognition,2018,77(5):354-377.
[24]LIU J W,WANG Y F,LUO X L.Research and Development on Deep Memory Network[J].Chinese Journal of Computers,2021,44(8):1549-1589.
[25]GUO M Y,SUN Z Y,ZHU Y Q,et al.A Framework for Fair Comparison of Network Representation Learning Algorithm Performance Based on Hyperparameter Tuning[J].Chinese Journal of Computers,2022,45(5):897-917.
[26]CUI P,WANG X,PEI J,et al.A survey on network embedding[J].IEEE Transactions on Knowledge and Data Engineering,2019,31(5):833-852.
[27]ROWEIS S,SAUL L.Nonlinear Dimensionality Reduction by Locally Linear Embedding[J].Science,2000,290(5500):2323-2326.
[28]BELKIN M,NIYOGI P.Laplacian eigenmaps and spectral techniques for embedding and clustering[C]//Proceedings of the Advances in Neural Information Processing Systems.Stroudsburg:ACL,2001:585-591.
[29]SHAW B,JEBARA T.Structure preserving embedding[C]//Proceedings of the 26th Annual International Conference on Machine Learning.New York:ACM,2009:937-944.
[30]LUO D,DING C H Q,NIE F,et al.Cauchy graph embedding[C]//Proceedings of the 28th International Conference on Machine Learning.New York:ACM,2011:553-560.
[31]AHMED A,SHERVASHIDZE N,NARAYANAMURTHY S,et al.Distributed large-scale natural graph factorization[C]//Proceedings of the 22nd International Conference on World Wide Web.New York:ACM,2013:37-48.
[32]CAO S,LU W,XU Q.Grarep:Learning graph representations with global structural information[C]//Proceedings of the 24th ACM International Conference on Information and Knowledge Management.New York:ACM,2015:891-900.
[33]OU M,CUI P,PEI J,et al.Asymmetric transitivity preserving graph embedding[C]//Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.New York:ACM,2016:1105-1114.
[34]QIU J,DONG Y,MA H,et al.Netsmf:Large-scale network embedding as sparse matrix factorization[C]//Proceedings of the 28th International Conference on World Wide Web.New York:ACM,2019:1509-1520.
[35]ZHANG J,DONG Y,WANG Y,et al.ProNE:Fast and Scalable Network Representation Learning[C]//Proceedings of the International Joint Conference on Artificial Intelligence.Menlo Park,CA:AAAI,2019:4278-4284.
[36]SHAW B,JEBARA T.Structure preserving embedding[C]//Proceedings of the 26th Annual International Conference on Machine Learning.New York:ACM,2009:937-944.
[37]AHMED A,SHERVASHIDZE N,NARAYANAMURTHY S,et al.Distributed large-scale natural graph factorization[C]//Proceedings of the 22nd International Conference on World Wide Web.New York:ACM,2013:37-48.
[38]QIU J,DONG Y,MA H,et al.Network embedding as matrix factorization:Unifying deepwalk,line,pte,and node2vec[C]//Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining.New York:ACM,2018:459-467.
[39]PEROZZI B,AL-RFOU R,SKIENA S.DeepWalk:Online Learning of Social Representations[C]//Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.New York:ACM,2014:701-710.
[40]GROVER A,LESKOVEC J.Node2Vec:Scalable Feature Learning for Networks[C]//Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery & Data Mining.New York:ACM,2016:855-864.
[41]TANG J,QU M,WANG M,et al.LINE:Large-scale information network embedding [C]//Proceedings of the 24th International Conference on World Wide Web.New York:ACM,2015:1067-1077.
[42]GOLDBERG Y,LEVY O.word2vec Explained:deriving Mikolov et al.'s negative-sampling word-embedding method[J].arXiv:1402.3722,2014.
[43]RIBEIRO L F R,SAVERESE P H P,FIGUEIREDO D R.Struct2Vec:Learning Node Representations from Structural Identity[C]//Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.New York:ACM,2017:385-394.
[44]CHEN H,PEROZZI B,HU Y,et al.Harp:Hierarchical representation learning for networks[C]//Proceedings of the 32nd AAAI Conference on Artificial Intelligence.Menlo Park,CA:AAAI,2018:1-32.
[45]PEROZZI B,KULKARNI V,SKIENA S.Walklets:Multiscale graph embeddings for interpretable network classification[J].arXiv:1605.02115,2016.
[46]CHU X K,FAN X X,BI J P.Position-Aware Network Representation Learning via K-Step Mutual Information Estimation[J].Journal of Computer Research and Development,2021,58(8):1612-1623.
[47]WANG D,CUI P,ZHU W.Structural deep network embedding[C]//Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.New York:ACM,2016:1225-1234.
[48]LV L,CHENG J,PENG N,et al.Auto-encoder based graph convolutional networks for online financial anti-fraud[C]//Proceedings of the IEEE Conference on Computational Intelligence for Financial Engineering and Economics.Piscataway,NJ:IEEE,2019:1-6.
[49]KIPF T N,WELLING M.Variational graph auto-encoders[C]//Proceedings of the Advances in Neural Information Processing Systems Workshop on Bayesian Deep Learning.Cambridge,MA:MIT Press,2016.
[50]HASANZADEH A,HAJIRAMEZANALI E,DUFFIELD N,et al.Semi-implicit graph variational auto-encoders[C]//Proceedings of the 33rd Conference on Neural Information Processing Systems.Cambridge,MA:MIT Press,2019:10711-10722.
[51]YUAN L N,HU H,LIU Z.Graph Representation Learning Based on Multi-Channel Graph Convolutional Autoencoders[J].Computer Engineering,2023,49(2):150-160,174.
[52]KIPF T N,WELLING M.Semi-supervised classification with graph convolutional networks[C]//Proceedings of the 6th International Conference on Learning Representations.Vancouver,Canada,2017.
[53]HOU Z,LIU X,CEN Y,et al.GraphMAE:Self-supervised masked graph autoencoders[C]//Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining.2022:594-604.
[54]SALEHI A,DAVULCU H.Graph attention auto-encoders[J].arXiv:1905.10715,2019.
[55]VELIČKOVIĆ P,CUCURULL G,CASANOVA A,et al.Graph attention networks[C]//Proceedings of the 7th International Conference on Learning Representations.Vancouver,Canada,2018.
[56]MICHAËL D,XAVIER B,PIERRE V.Convolutional neural networks on graphs with fast localized spectral filtering[C]//Proceedings of the 30th International Conference on Neural Information Processing Systems(NIPS'16).New York,USA,2016:3844-3852.
[57]HAMILTON W L,YING R,LESKOVEC J.Inductive representation learning on large graphs[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems.Cambridge,USA,2017:1025-1035.