Computer Science ›› 2022, Vol. 49 ›› Issue (5): 200-205. doi: 10.11896/jsjkx.210300198

• Artificial Intelligence •

Dialogue-based Entity Relation Extraction with Knowledge

LU Liang, KONG Fang   

  School of Computer Science and Technology, Soochow University, Suzhou, Jiangsu 215006, China
  • Received: 2021-03-19  Revised: 2021-07-16  Online: 2022-05-15  Published: 2022-05-06
  • About author: LU Liang, born in 1995, postgraduate, is a member of China Computer Federation. His main research interests include natural language processing.
    KONG Fang, born in 1977, Ph.D, professor, Ph.D supervisor, is a member of China Computer Federation. Her main research interests include natural language processing and discourse analysis.
  • Supported by:
    General Program of the National Natural Science Foundation of China (61876118) and Key Program of the National Natural Science Foundation of China (61836007).

Abstract: Entity relation extraction aims to extract semantic relations between entities from text. To date, work on entity relation extraction has focused mainly on written texts, such as news articles and Wikipedia, and has achieved considerable success, while research on dialogue texts is still at an early stage. The dialogue corpora currently used for entity relation extraction are small in scale and low in information density, making it difficult to capture effective features. Moreover, deep learning models do not associate knowledge the way humans do, so simply increasing the amount of annotated data and the available computing power is not enough to understand dialogue content in detail and in depth. In response to these problems, this paper proposes a knowledge-integrated entity relation extraction model, which uses Star-Transformer to capture features from dialogue texts effectively, and constructs a relation set containing relations and their semantic keywords through keyword co-occurrence. The salient relation features obtained by computing the correlation between this set and the dialogue text are integrated into the model as knowledge. Experimental results on the DialogRE dataset show an F1 value of 53.6% and an F1c value of 49.5%, demonstrating the effectiveness of the proposed method.
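The abstract does not give implementation details, but the knowledge-integration step it describes — scoring the dialogue text against a set of relation (keyword) embeddings and folding the attention-weighted relation features back into the representation — can be sketched roughly as follows. All names, dimensions, and the concatenation choice here are illustrative assumptions, not the authors' actual model:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def inject_relation_knowledge(dialogue, relation_set):
    """dialogue: (T, d) dialogue token embeddings.
    relation_set: (R, d) embeddings summarising each relation and its keywords.
    Returns (T, 2d): each token concatenated with its attention-weighted
    relation feature, i.e. the 'knowledge' fused into the model."""
    scores = dialogue @ relation_set.T      # (T, R) correlation: text vs. relation set
    weights = softmax(scores, axis=-1)      # attention distribution over relations
    knowledge = weights @ relation_set      # (T, d) relation features per token
    return np.concatenate([dialogue, knowledge], axis=-1)

rng = np.random.default_rng(0)
tokens = rng.standard_normal((6, 16))      # 6 dialogue tokens, 16-dim embeddings
relations = rng.standard_normal((4, 16))   # 4 relations with keyword-derived embeddings
enriched = inject_relation_knowledge(tokens, relations)
print(enriched.shape)                      # (6, 32)
```

In the paper's actual model the enriched representation would feed a Star-Transformer encoder rather than being used directly; this sketch only illustrates the correlation-then-fuse idea.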

Key words: Attention mechanism, Dialogue context, Entity relation extraction, Knowledge integration, Transformer

CLC Number: TP391