Computer Science ›› 2021, Vol. 48 ›› Issue (12): 188-194. doi: 10.11896/jsjkx.210100203

• Database & Big Data & Data Science •

Attribute Network Representation Learning Based on Global Attention

XU Ying-kun, MA Fang-nan, YANG Xu-hua, YE Lei   

  1. College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, China
  • Received: 2021-01-26  Revised: 2021-04-08  Online: 2021-12-15  Published: 2021-11-26
  • About author: XU Ying-kun, born in 1979, Ph.D, lecturer, is a member of China Computer Federation. His main research interests include machine learning and classification algorithms.
    YE Lei, born in 1979, Ph.D, associate professor. Her main research interests include machine learning.
  • Supported by:
    National Natural Science Foundation of China (61773348) and Zhejiang Province Natural Science Foundation of China (LY20F020029).

Abstract: An attribute network not only has a complex topology, its nodes also contain rich attribute information. Attribute network representation learning methods extract the network topology and node attribute information simultaneously to learn low-dimensional vector embeddings of large attribute networks, with important and extensive applications in network analysis tasks such as node classification, link prediction and community detection. In this paper, we first obtain the structure embedding vectors of the network from the topology of the attribute network. We then propose to learn the attribute information of adjacent nodes through a global attention mechanism: a convolutional neural network convolves each node's attribute information to produce hidden vectors, the weight vector and correlation matrix of the global attention are generated from these hidden vectors, and the attribute embedding vectors of the nodes are obtained from them. Finally, the structure embedding vector and the attribute embedding vector are concatenated to obtain a joint embedding vector that reflects both the network structure and the node attributes. On three real-world data sets, the new algorithm is compared with eight current network embedding models on tasks such as link prediction and node classification. Experimental results show that the new algorithm achieves good attribute network embedding performance.
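The pipeline sketched in the abstract (structure embedding from topology, a CNN over node attributes, global attention built from the resulting hidden vectors, and concatenation into a joint embedding) can be illustrated with a minimal NumPy sketch. Everything below is an illustrative assumption rather than the authors' implementation: the dimensions are arbitrary, a single dense projection stands in for the paper's CNN, and a bilinear form H·A·Hᵀ stands in for the global-attention correlation matrix.

```python
# Minimal NumPy sketch of the joint-embedding pipeline described in the
# abstract. Dimensions, the dense projection standing in for the CNN, and
# the bilinear attention scores are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, attr_dim, hid_dim, struct_dim = 5, 16, 8, 8

# 1) Structure embeddings, assumed precomputed from the topology
#    (e.g., by a random-walk method such as DeepWalk or node2vec).
Z_struct = rng.normal(size=(n_nodes, struct_dim))

# 2) Node attributes mapped to hidden vectors. The paper uses a CNN;
#    a single dense projection stands in for it here.
X = rng.normal(size=(n_nodes, attr_dim))
W_conv = 0.1 * rng.normal(size=(attr_dim, hid_dim))
H = np.tanh(X @ W_conv)                               # (n_nodes, hid_dim)

# 3) Global attention: a correlation matrix scores every node pair from
#    the hidden vectors; softmax-normalised rows weight the nodes'
#    hidden vectors to form each node's attribute embedding.
A_corr = 0.1 * rng.normal(size=(hid_dim, hid_dim))    # learned in practice
scores = H @ A_corr @ H.T                             # (n_nodes, n_nodes)
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)         # row-wise softmax
Z_attr = weights @ H                                  # (n_nodes, hid_dim)

# 4) Joint embedding: concatenate structure and attribute embeddings.
Z = np.concatenate([Z_struct, Z_attr], axis=1)        # (5, struct_dim + hid_dim)
print(Z.shape)
```

In practice W_conv and A_corr would be trained end to end and the structure embeddings would come from a dedicated structure encoder; the sketch only shows how the pieces compose.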

Key words: Attribute embedding, Convolutional neural network, Global attention mechanism, Joint embedding, Structure embedding
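For context on the evaluation mentioned in the abstract: link prediction from a joint embedding is commonly scored by comparing endpoint similarities of held-out edges against sampled non-edges under AUC. The toy setup below assumes scikit-learn and random stand-in embeddings; it is illustrative only, not the authors' evaluation protocol.

```python
# Illustrative link-prediction scoring from a joint embedding matrix Z.
# The edge sampling and AUC protocol here are generic assumptions.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_nodes, dim = 100, 16
Z = rng.normal(size=(n_nodes, dim))          # stand-in joint embeddings

# Toy held-out positive edges and sampled non-edges (negatives).
pos = rng.integers(0, n_nodes, size=(50, 2))
neg = rng.integers(0, n_nodes, size=(50, 2))

def edge_score(Z, pairs):
    """Inner-product similarity between the two endpoint embeddings."""
    return np.sum(Z[pairs[:, 0]] * Z[pairs[:, 1]], axis=1)

y_true = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
y_score = np.concatenate([edge_score(Z, pos), edge_score(Z, neg)])
print("AUC:", roc_auc_score(y_true, y_score))
```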

CLC Number: TP391