Computer Science ›› 2025, Vol. 52 ›› Issue (9): 282-293. doi: 10.11896/jsjkx.240700201

• Artificial Intelligence •

Knowledge Graph Completion Model Using Semantically Enhanced Prompts and Structural Information

CAI Qihang, XU Bin, DONG Xiaodi   

  1. School of Computer Science and Engineering, Northeastern University, Shenyang 110169, China
  • Received: 2024-07-31  Revised: 2024-10-21  Online: 2025-09-15  Published: 2025-09-11
  • About author: CAI Qihang, born in 1999, postgraduate, is a member of CCF (No. U9520G). Her main research interest is knowledge graph completion.
    XU Bin, born in 1980, Ph.D., associate professor, is a member of CCF (No. 21664S). His main research interests include artificial intelligence and smart education.
  • Supported by:
    National Natural Science Foundation of China (62137001) and Liaoning Natural Science Foundation (2022-MS-119).

Abstract: Knowledge graph completion aims to infer new facts from existing ones, enhancing the comprehensiveness and reliability of a knowledge graph and thereby its practical value. Existing methods based on pre-trained language models suffer from three problems: prediction quality differs markedly between head and tail entities, training fluctuates heavily because continuous prompts are randomly initialized, and the structural information of the knowledge graph is under-utilized. To address these problems, this paper proposes a knowledge graph completion model using semantically enhanced prompts and structural information (SEPS-KGC). The model follows a multi-task learning framework that combines the knowledge graph completion task with an entity prediction task. First, an example-guided relation template generation method is designed, in which a large language model generates two targeted relation prompt templates, one for head-entity prediction and one for tail-entity prediction, and semantic auxiliary information is incorporated so that the model better captures the semantic associations between entities. Second, a prompt learning method based on effective initialization is designed, which initializes the continuous prompts with the pre-trained embeddings of relation labels. Finally, a structural information extraction module is designed, which extracts the structural information of the knowledge graph through convolution and pooling operations, improving the model's stability and understanding of relations. The effectiveness of SEPS-KGC is demonstrated on two public datasets.
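The two prompt-related components described above can be made concrete with a short sketch. The PyTorch code below is a minimal illustration written for this page, not the authors' implementation: the class names, the prompt length, the BERT-base backbone, and the Conv1d/pooling shapes are all assumptions inferred from the abstract.

```python
# Minimal sketch only: SEPS-KGC's code is not published here, so every name,
# dimension, and layer below is an assumption inferred from the abstract.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer


class SoftPromptFromRelationLabel(nn.Module):
    """Continuous prompt whose vectors start from the pre-trained embeddings
    of the relation label's tokens instead of random values, the
    initialization the abstract credits with reducing training fluctuation."""

    def __init__(self, encoder: BertModel, tokenizer: BertTokenizer,
                 relation_label: str, prompt_len: int = 5):
        super().__init__()
        word_emb = encoder.get_input_embeddings().weight        # (vocab, hidden)
        ids = tokenizer(relation_label, add_special_tokens=False,
                        return_tensors="pt").input_ids[0]
        init = word_emb[ids].detach().clone()                   # (n_tokens, hidden)
        # Tile the label embeddings to fill all prompt slots, then truncate.
        tiles = (prompt_len + len(ids) - 1) // len(ids)
        self.prompt = nn.Parameter(init.repeat(tiles, 1)[:prompt_len])

    def forward(self, batch_size: int) -> torch.Tensor:
        # (batch, prompt_len, hidden), ready to prepend to token embeddings.
        return self.prompt.unsqueeze(0).expand(batch_size, -1, -1)


class StructureExtractor(nn.Module):
    """Stand-in for the structural-information module: convolution and
    pooling over stacked (entity, relation) embeddings, in the spirit of
    ConvE-style encoders; the paper's exact architecture may differ."""

    def __init__(self, hidden: int = 768, channels: int = 32):
        super().__init__()
        self.conv = nn.Conv1d(in_channels=2, out_channels=channels,
                              kernel_size=3, padding=1)
        self.pool = nn.AdaptiveMaxPool1d(1)
        self.proj = nn.Linear(channels, hidden)

    def forward(self, entity_emb: torch.Tensor, rel_emb: torch.Tensor):
        x = torch.stack([entity_emb, rel_emb], dim=1)  # (batch, 2, hidden)
        x = torch.relu(self.conv(x))                   # (batch, channels, hidden)
        return self.proj(self.pool(x).squeeze(-1))     # (batch, hidden)


if __name__ == "__main__":
    # Hypothetical usage with a BERT-base backbone.
    tok = BertTokenizer.from_pretrained("bert-base-uncased")
    enc = BertModel.from_pretrained("bert-base-uncased")
    prompt = SoftPromptFromRelationLabel(enc, tok, "place of birth")(batch_size=8)
    struct = StructureExtractor()(torch.randn(8, 768), torch.randn(8, 768))
    print(prompt.shape, struct.shape)  # [8, 5, 768] and [8, 768]
```

The design point the abstract emphasizes is visible in SoftPromptFromRelationLabel: the trainable prompt is seeded with the relation label's pre-trained token embeddings, so optimization starts from a semantically meaningful point rather than from noise.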

Key words: Knowledge graph, Knowledge graph completion, Pre-trained language model, Large language model, Prompt learning, Structural information

CLC Number: TP391