Computer Science ›› 2021, Vol. 48 ›› Issue (12): 188-194. doi: 10.11896/jsjkx.210100203

• Database & Big Data & Data Science •

  • Corresponding author: YE Lei (yelei@zjut.edu.cn)
  • Author email: xyk@zjut.edu.cn

Attribute Network Representation Learning Based on Global Attention

XU Ying-kun, MA Fang-nan, YANG Xu-hua, YE Lei   

  1. College of Computer Science and Technology,Zhejiang University of Technology,Hangzhou 310023,China
  • Received:2021-01-26 Revised:2021-04-08 Online:2021-12-15 Published:2021-11-26
  • About author:XU Ying-kun,born in 1979,Ph.D,lecturer,is a member of China Computer Federation.His main research interests include machine learning and classification algorithms.
    YE Lei,born in 1979,Ph.D,associate professor.Her main research interests include machine learning.
  • Supported by:
    National Natural Science Foundation of China(61773348) and Zhejiang Province Natural Science Foundation of China(LY20F020029).


Abstract: An attribute network not only has a complex topology; its nodes also carry rich attribute information.Attribute network representation learning methods extract both the network topology and the node attribute information to learn low-dimensional vector embeddings of large attribute networks,and have important and wide-ranging applications in network analysis tasks such as node classification,link prediction and community detection.In this paper,we first obtain the structure embedding vector of the network according to the topology of the attribute network.We then learn the attribute information of adjacent nodes through a global attention mechanism:a convolutional neural network convolves a node's attribute information to obtain hidden vectors,from which the global attention weight vector and correlation matrix are generated,yielding the attribute embedding vector of the node.Finally,the structure embedding vector and the attribute embedding vector are concatenated to obtain a joint embedding vector that reflects both the network structure and the node attributes.On three real data sets,the new algorithm is compared with eight well-known network embedding models on tasks such as link prediction and node classification.Experimental results show that the new algorithm achieves good attribute network embedding performance.
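The pipeline in the abstract (structure embedding from topology, convolution over node attributes, global attention pooling, then concatenation) can be sketched as follows. This is a minimal illustrative NumPy sketch, not the authors' implementation: the structural encoder is a plain truncated-SVD stand-in, and the convolution filters and global attention query are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def structure_embedding(adj, dim):
    # Stand-in structural encoder: truncated SVD of the adjacency matrix.
    # (The paper's actual structural encoder is not specified in the abstract.)
    u, s, _ = np.linalg.svd(adj, full_matrices=False)
    return u[:, :dim] * s[:dim]

def attribute_embedding_with_global_attention(attrs, n_filters=8, kernel=3):
    # 1-D convolution over each node's attribute vector produces one hidden
    # vector per filter; a global attention query then scores the filters,
    # and the attention-weighted sum gives the node's attribute embedding.
    n, f = attrs.shape
    w = rng.standard_normal((n_filters, kernel)) * 0.1   # placeholder filters
    out_len = f - kernel + 1
    hidden = np.empty((n, n_filters, out_len))
    for k in range(n_filters):
        for t in range(out_len):
            hidden[:, k, t] = attrs[:, t:t + kernel] @ w[k]
    q = rng.standard_normal(out_len) * 0.1               # global attention query
    scores = hidden @ q                                  # (n, n_filters)
    alpha = np.exp(scores - scores.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)            # softmax weights
    return (alpha[:, :, None] * hidden).sum(axis=1)      # (n, out_len)

# Toy attribute network: 5 nodes on a ring, 10 attributes per node.
adj = np.zeros((5, 5))
for i in range(5):
    adj[i, (i + 1) % 5] = adj[(i + 1) % 5, i] = 1
attrs = rng.standard_normal((5, 10))

z_struct = structure_embedding(adj, dim=4)                    # (5, 4)
z_attr = attribute_embedding_with_global_attention(attrs)     # (5, 8)
z_joint = np.concatenate([z_struct, z_attr], axis=1)          # joint embedding
print(z_joint.shape)  # (5, 12)
```

The joint embedding `z_joint` would then feed downstream tasks such as link prediction or node classification, e.g. via a logistic-regression classifier.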

Key words: Attribute embedding, Convolutional neural network, Global attention mechanism, Joint embedding, Structure embedding

CLC number: TP391
[1]PEROZZI B,AL-RFOU R,SKIENA S.DeepWalk:Online Learning of Social Representations[C]//Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.New York:ACM,2014:701-710.
[2]KIPF T N,WELLING M.Semi-supervised Classification with Graph Convolutional Networks[C]//5th International Conference on Learning Representations.Toulon,2017.
[3]WANG C,PAN S,LONG G,et al.MGAE:Marginalized Graph Autoencoder for Graph Clustering[C]//International Conference on Information and Knowledge Management.Singapore:ACM,2017:889-898.
[4]XIONG F,WANG X,PAN S,et al.Social Recommendation with Evolutionary Opinion Dynamics[J].IEEE Transactions on Systems Man Cybernetics-Systems,2018,50(10):3804-3816.
[5] ZHAO X,LI X,ZHANG Z H,et al.Community discovery algorithm combining community embedding and node embedding[J].Computer Science,2020,47(12):279-284.
[6]CAI H,ZHENG V,CHANG K.A Comprehensive Survey of Graph Embedding:Problems,Techniques and Applications[J].IEEE Transactions on Knowledge and Data Engineering,2018,30(9):1616-1637.
[7]SHI C,HU B,ZHAO W,et al.Heterogeneous Information Network Embedding for Recommendation[J].IEEE Transactions on Knowledge and Data Engineering,2019,31(2):357-370.
[8]ZHOU L E,YOU J G.Social Recommendation with Embedding of Summarized Graphs[J].Journal of Chinese Computer Systems,2021,42(1):78-84.
[9]GROVER A,LESKOVEC J.Node2vec:Scalable Feature Learning for Networks[C]//Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.San Francisco:ACM,2016:855-864.
[10]WANG D,CUI P,ZHU W.Structural Deep Network Embedding[C]//Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.San Francisco:ACM,2016:1225-1234.
[11]IVANOV S,BURNAEV E.Anonymous Walk Embeddings [C]//35th International Conference on Machine Learning.Stockholm:ACM,2018:3448-3457.
[12]DUTTA A,RIBA P,LLADOS J,et al.Hierarchical Stochastic Graphlet Embedding for Graph-based Pattern Recognition[J].Neural Computing & Applications,2019,32(15):11596-11597.
[13]ROWEIS S,SAUL L.Nonlinear Dimensionality Reduction by Locally Linear Embedding[J].Science,2000,290(5500):2323-2326.
[14]BELKIN M,NIYOGI P.Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering[J].Advances in Neural Information Processing Systems,2001,14:585-591.
[15]CAO S,LU W,XU Q.GraRep:Learning Graph Representations with Global Structural Information[C]//International Conference on Information and Knowledge Management.Melbourne:ACM,2015:891-900.
[16]MIKOLOV T,CHEN K,CORRADO G,et al.Efficient Estimation of Word Representations in Vector Space[C]//1st International Conference on Learning Representations.Arizona,2013.
[17]CHEN H,HU Y,PEROZZI B,et al.HARP:Hierarchical Representation Learning for Networks[C]//32nd AAAI Conference on Artificial Intelligence.New Orleans:AAAI,2018:2127-2134.
[18]CAO S,LU W,XU Q.Deep Neural Networks for Learning Graph Representations[C]//Thirtieth AAAI Conference on Artificial Intelligence.Arizona:AAAI,2016:1145-1152.
[19]KIPF T N,WELLING M.Variational Graph Auto-Encoders [C]//Proceedings of NIPS.Barcelona:MIT Press,2016.
[20]YANG C,LIU Z,ZHAO D,et al.Network Representation Learning with Rich Text Information[C]//IJCAI International Joint Conference on Artificial Intelligence.Buenos Aires:Morgan Kaufmann,2015:2111-2117.
[21]TU C,ZHANG W,LIU Z,et al.Max-Margin DeepWalk:Discriminative learning of network representation[C]//IJCAI International Joint Conference on Artificial Intelligence.New York:Morgan Kaufmann,2016:3889-3895.
[22]TU C,HAN L,LIU Z,et al.CANE:Context-Aware Network Embedding for Relation Modeling[C]//ACL 2017-55th Annual Meeting of the Association for Computational Linguistics.Vancouver:ACL,2017:1722-1731.
[23]TU C,WANG H,ZENG X,et al.Community-enhanced Network Representation Learning for Network Analysis[J].arXiv:1611.06645,2016.
[24]MIKOLOV T,SUTSKEVER I,CHEN K,et al.Distributed Representations of Words and Phrases and Their Compositionality[C]//Proceedings of NIPS.Lake Tahoe:MIT Press,2013:3111-3119.
[25]TANG J,QU M,WANG M,et al.LINE:Large-scale Information Network Embedding[C]//Proceedings of the 24th International Conference on World Wide Web.New York:ACM,2015:1067-1077.
[26]LAHOTI P,GARIMELLA K,GIONIS A.Joint non-negative Matrix Factorization for Learning Ideological Leaning on Twitter[C]//WSDM'18:Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining.Los Angeles:ACM,2018:351-359.
[27]TAN C,TANG J,SUN J,et al.Social Action Tracking Via Noise Tolerant Time-varying Factor Graphs[C]//Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.Washington:ACM,2010:1049-1058.
[28]CHEN J,ZHANG Q,HUANG X.Incorporate Group Information to Enhance Network Embedding[C]//International Conference on Information and Knowledge Management.Indianapolis:ACM,2016:1901-1904.
[29]SUN X,GUO J,DING X,et al.A General Framework for Content-enhanced Network Representation Learning[J].arXiv:1610.02906v3,2016.
[30]CHO K,VAN MERRIENBOER B,GULCEHRE C,et al.Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation[C]//EMNLP 2014-2014 Conference on Empirical Methods in Natural Language Processing.Doha:ACL,2014:1724-1734.
[31]SUTSKEVER I,VINYALS O,LE Q.Sequence to Sequence Learning with Neural Networks[C]//Advances in Neural Information Processing Systems.Montréal:MIT Press,2014:3104-3112.
[32]BAHDANAU D,CHO K,BENGIO Y.Neural Machine Translation by Jointly Learning to Align and Translate[C]//3rd International Conference on Learning Representations.San Diego,2015.
[33]LUONG M,PHAM H,MANNING C.Effective Approaches to Attention-based Neural Machine Translation[C]//Conference Proceedings-EMNLP 2015:Conference on Empirical Methods in Natural Language Processing.Lisbon:ACL,2015:1412-1421.
[34]VASWANI A,SHAZEER N,PARMAR N,et al.Attention Is All You Need[C]//Advances in Neural Information Processing Systems.Long Beach:MIT Press,2017:5999-6009.
[35]CHENG J,DONG L,LAPATA M.Long Short-Term Memory-Networks for Machine Reading[C]//EMNLP 2016-Conference on Empirical Methods in Natural Language Processing.Austin:ACL,2016:551-561.
[36]VELICKOVIC P,CUCURULL G,CASANOVA A,et al.Graph Attention Networks[C]//6th International Conference on Learning Representations.Vancouver,2018.
[37]LIU Z,CHEN C,LI L,et al.Graph Neural Networks with Adaptive Receptive Paths[C]//33rd AAAI Conference on Artificial Intelligence.Honolulu:AAAI,2019:4424-4431.
[38]HU B,LU Z,LI H,et al.Convolutional neural network architectures for matching natural language sentences[C]//Advances in Neural Information Processing Systems 27-28th Annual Conference on Neural Information Processing Systems 2014.Montreal:MIT Press,2014:2042-2050.
[39]MIKOLOV T,SUTSKEVER I,CHEN K,et al.Distributed Representations of Words and Phrases and Their Compositionality[C]//Proceedings of NIPS.Lake Tahoe:MIT Press,2013:3111-3119.
[40]KINGMA D,BA J.Adam:A Method for Stochastic Optimization[C]//Proceedings of ICLR.San Diego,2015.
[41]MCCALLUM A,NIGAM K,RENNIE J,et al.Automating the Construction of Internet Portals with Machine Learning[J].Information Retrieval,2000,3(2):127-163.
[42]LESKOVEC J,KLEINBERG J,FALOUTSOS C.Graphs over Time:Densification Laws,Shrinking Diameters and Possible Explanations[C]//Proceedings of the Eleventh ACM SIGKDD International Conference on Knowledge Discovery in Data Mining.Chicago:ACM,2005:177-187.
[43]AIROLDI E,BLEI D,FIENBERG S,et al.Mixed Membership Stochastic Blockmodels[J].Journal of Machine Learning Research,2008,9:1981-2014.
[44]HANLEY J,MCNEIL B.The Meaning and Use of the Area Under a Receiver Operating Characteristic(ROC) Curve[J].Radiology,1982,143(1):29-36.
[45]FAN R,CHANG K,HSIEH C,et al.LIBLINEAR:A Library for Large Linear Classification[J].Journal of Machine Learning Research,2008,9(9):1871-1874.