Computer Science ›› 2025, Vol. 52 ›› Issue (1): 1-33.doi: 10.11896/jsjkx.240100109
• Technology Research and Application of Large Language Model • Previous Articles Next Articles
ZENG Zefan1,2,3, HU Xingchen1,2, CHENG Qing1,2, SI Yuehang1,2, LIU Zhong1,2
CLC Number:
[1]GEORGE M.WordNet:A Lexical Database for English [J].Communications of the ACM,1995,38(11):39-41. [2]AUER S,BIZER C,KOBILAROV G,et al.DBpedia:A Nucleus for a Web of Open Data[C]//Proceedings of the 6th International Semantic Web Conference,2nd Asian Semantic Web Conference.Berlin,German:Springer,2007:722-735. [3]SUCHANEK F M,KASNECI G,WEIKUM G.YAGO:A Large Ontology from Wikipedia and WordNet[J].Journal of Web Semantics,2008,6(3):203-217. [4]CARLSON A,BETTERIDGE J,KISIEL B,et al.Toward an architecture for never-ending language learning[C]//Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence(AAAI’10).Atlanta,USA:AAAI,2010:1306-1313. [5]WU W,LI H,WANG H et al.Probase:a probabilistic taxonomy for text understanding[C]//Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data.New York,USA:ACM,2012:481-492. [6]XU B,XU Y,LIANG J,et al.CN-DBpedia:A Never-EndingChinese Knowledge Extraction System[C]//Proceedings of 2017 the International Conference on Industrial,Engineering and Other Applications of Applied Intelligent Systems.Berlin,German:Springer,2017:428-438. [7]NIU X,SUN X,WANG H,et al.Zhishi.me-Weaving ChineseLinking Open Data[C]//Proceedings of 10th International Semantic Web Conference.Berlin,German:Springer,2011:23-27. [8]ZHOU S,DAI X,CHEN H,et al.Interactive Recommender System via Knowledge Graph-enhanced Reinforcement Learning[C]//Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval(SIGIR’20).New York,NY,USA:Association for Computing Machinery,2020:179-188. [9]CUI H,PENG T,FENG L,et al.Simple Question Answeringover Knowledge Graph Enhanced by Question Pattern Classification [J].Knowledge and Information Systems,2021,63(2):2741-2761. [10]MA Y,CROOK P,SARIKAYA R,et al.Knowledge Graph Inference for spoken dialog systems[C]//2015 IEEE International Conference on Acoustics,Speech and Signal Processing(ICASSP).South Brisbane,Australia:IEEE,2015:5346-5350. [11]VASWANI A,SHAZEER N,PARMAR N,et al.Attention isall you need[C]//Proceedings of the 31st International Confe-rence on Neural Information Processing Systems(NIPS’17).Red Hook,NY,USA:ACM,2017:6000-6010. [12]DEVLIN J,CHANG M,LEE K,et al.BERT:Pre-training ofDeep Bidirectional Transform- ers for Language Understanding[C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics:Human Language Technologies,Minneapolis,Minnesota.ACL,2019:4171-4186. [13]RADFORD A,NARASIMHAN K,SALIMANS T,et al.Improving language understanding by generative pre-training [EB/OL].[2023-9-20] https://openai.com/blog/language-unsupervised. [14]LEWIS M,LIU Y,GOYALN,et al.BART:Denoising Sequence-to-Sequence Pre-training for Natural Language Generation,Translation,and Comprehension[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.ACL,2020:7871-7880. [15]WINATA G,MADOTTO A,LIN Z,et al.Language Models are Few-shot Multilingual Learners[C]//Proceedings of the 1st Workshop on Multilingual Representation Learning,Punta Cana,Dominican Republic.Association for Computational Linguistics,2021:1-15. [16]IntroducingChatGPT [EB/OL].[2023-09-20] http://openai.com/blog/chatgpt. [17]OpenAI,GPT-4 Technical Report [J].arXiv:2303.08774,2023. [18]PAN S,LUO L,WANG Y,et al.Unifying Large LanguageModels and Knowledge Graphs:A Roadmap [J].arXiv:2306.08302,2023. [19]PAN J,RAZNIEWSKI S,KALO J,et al.Large Language Mo-dels and Knowledge Graphs:Opportunities and Challenges [J].arXiv:2308.06374,2023. [20]LI J,SUN A,HAN J,LI C.A Survey on Deep Learning forNamed Entity Recognition [J].IEEE Transactions on Know-ledge and Data Engineering,2022,34(1):50-70. [21]WANG H,LU G,YIN J,et al.Relation Extraction:A Brief Survey on Deep Neural Network Based Methods[C]//Proceedings of the 2021 4th International Conference on Software Enginee-ring and Information Management(ICSIM’21).New York,NY,USA:Association for Computing Machinery,2021:220-228. [22]SHEN W,LI Y,LIU Y,et al.Entity Linking Meets Deep Lear-ning [J].Techniques and Solutions.IEEE Transactions on Knowledge and Data Engineering,2023,35(3):2556-2578. [23]ZHONG L,WU J,LI Q,et al.A Comprehensive Survey on Automatic Knowledge Graph Construction [J].arXiv:2302.05019,2023. [24]CAI H,ZHENG V,CHANG K.A Comprehensive Survey ofGraph Embedding:Problems,Techniques,and Applications [J].IEEE Transactions on Knowledge and Data Engineering,2018,30(9):1616-1637. [25]CAO J,FANG J,MENG Z,et al.Knowledge Graph Embedding:A Survey from the Perspective of Representation Spaces [J].arXiv:2211.03536,2022. [26]XU Y,ZHAO J,WANG Y,et al.Temporal Knowledge Graph Representation Learning [J].Chinese Journal of Computer Science,2022,49(9):162-171. [27]LIANG K,MENG L,LIU M,et al.Reasoning over Different Types of Knowledge Graphs:Static,Temporal and Multi-Modal [J].arXiv:2212.05767,2022. [28]XIA Y,LAN M,CHEN X,et al.Survey on explainable know-ledge graph reasoning methods [J].Chinese Journal of Network and Information Security,2022,8(5):1-25. [29]MA R,LI Z,CHEN Z,et al.Survey on Knowledge Graph Reasoning [J].Chinese Journal of Computer Science,2022,49(S1):74-85. [30]FENG H,DUAN L,ZHANG B.Overview on Knowledge Reasoning for Knowledge Graph [J].Chinese Journal of Computer Systems & Applications,2021,30(10):21-30. [31]HUANG X,ZHANG J,LI D,et al.Knowledge Graph Embedding Based Question Answering[C]//Proceedings of the 12th ACM International Conference on Web Search and Data Mining(WSDM’19).New York,NY,USA:ACM,2019:105-113. [32]SA R,LI Y,LIN M.Survey of Question Answering Based on Knowledge Graph Reasoning [J].Chinese Journal of Frontiers of Computer Science and Technology,2022,16(8):1727-1741. [33]ZHANG W,CHEN J,LI J,et al.Knowledge Graph Reasoning with Logics and Embeddings:Survey and Perspective [J].ar-Xiv:2202.07412. [34]BORDES A,USUNIER N,GARCIA-DURÁN A,et al.Translating embeddings for modeling multi-relational data[C]//Proceedings of the 26th International Conference on Neural Information Processing Systems.Red Hook,NY,USA:Curran Associates Inc.,2013:2787-2795. [35]DETTMERS T,MINERVINI P,STENETORP P,et al.Convolutional 2D knowledge graph embeddings[C]//Proceedings of the 32nd AAAI Conference on Artificial Intelligence and 30th Innovative Applications of Artificial Intelligence Conference and 8th AAAI Symposium on Educational Advances in Artificial Intelligence.New Orleans,Louisiana,USA:AAAI,2018:1811-1818. [36]WANG Z,ZHANG J,FENG J,et al.Knowledge graph embedding by translating on hyperplanes[C]//Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence(AAAI’14).Quebec City,Canada:AAAI,2014:1112-1119. [37]YU C,WANG F,LIU Y,et al.Research on knowledge graphalignment model based on deep learning [J].Expert Systems with Applications,2021,186:115768. [38]SU M,SU H,ZHENG H,et al.Deep Learning For Knowledge Graph Completion With XLNET[C]//Proceedings of the 2021 5th International Conference on Deep Learning Technologies(ICDLT’21).New York,NY,USA:ACM,2021:13-19. [39]ZHU Z,GALKIN M,ZHANG Z,et al.Neural-Symbolic Models for Logical Queries on Knowledge Graphs [J].arXiv:2205.10128,2022. [40]ZHANG J,CHEN B,ZHANG L,et al.Neural,symbolic and neural-symbolic reasoning on knowledge graphs [J].AI Open,2021,2:14-35. [41]GUAN S,CHENG X.BAI L,et al.What is Event Knowledge Graph:A Survey [J].IEEE Transactions on Knowledge and Data Engineering,2015,35(7):7569-7589. [42]ZHU X.,LI Z,WANG X,et al.Multi-Modal Knowledge Graph Construction and Applic-ation:A Survey[J].IEEE Transactions on Knowledge and Data Engineering,2022,34(1):1-15. [43]FENSEL D,.SIMSEK U,ANGELE K,et al.Introduction:What Is a Knowledge Graph?[M]//Knowledge Graphs.Cham:Springer,2020:1-10. [44]PETERS M E,NEUMANN M,IYYER M,et al.Deep Contex-tualized Word Representations [C]//Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics:Human Language Technologies.New Orleans,Louisiana:ACL,2018:2227-2237. [45]TANG Z,WANG B,YAO T.DPTDR:Deep Prompt Tuning for Dense Passage Retrieval[C]//Proceedings of the 29th International Conference on Computational Linguistics.Gyeongju,Republic of Korea:International Committee on Computational Linguistics.2022:1193-1202. [46]LIU X,JI K,FU Y,et al.P-Tuning:Prompt Tuning Can BeComparable to Fine-tuning Across Scales and Tasks[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics.Dublin,Ireland:ACL,2022:61-68. [47]FU J,FENG L,ZHANG Q,et al.Larger-Context Tagging:When and Why Does It Work?[C]//Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:Human Language Technologies.ACL,2021:1463-1475. [48]YU J,BOHNET B,POESIO M.Named Entity Recognition as Dependency Parsing[C]//Proceedings of the 58th Annual Mee-ting of the Association for Computational Linguistics.ACL,2020:6470-6476. [49]TAN C,QIU W,CHEN M,et al.Boundary Enhanced NeuralSpan Classification for Nested Named Entity Recognition[C]//Proceedings of the AAAI Conference on Artificial Intelligence,Palo Alto,California USA:AAAI,2020:9016-9023. [50]YAO F,TAN C,CHEN M,et al.Nested Named Entity Recognition with Partially-Observed TreeCRFs[C]//Proceedings of the AAAI Conference on Artificial Intelligence.Vancouver,Canada:AAAI,2021:12839-12847. [51]LOU C,YANG S,TU K.Nested Named Entity Recognition as Latent Lexicalized Constituency Parsing[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics.Dublin,Ireland:ACL,2022:6183-6198. [52]YANG S,TU K.Bottom-Up Constituency Parsing and Nested Named Entity Recognition with Pointer Networks[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics.Dublin,Ireland:ACL,2022:2403-2416. [53]LI F,LIN Z,ZHANG M,et al.A Span-Based Model for JointOverlapped and Discont-inuous Named Entity Recognition[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing.ACL,2021:4814-4828. [54]YAN H,GUI T,DAI J,et al.A Unified Generative Framework for Various NER Subtasks[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing.ACL,2021:5808-5822. [55]LUIGGI T,GUIGUE V,SOULIER L,et al.Dynamic NamedEntity Recognition[C]//Proceedings of the 38th ACM/SIGAPP Symposium on Applied Computing.New York,NY,USA:ACM,2023:890-897. [56]ONOE Y,DURRETT G.Learning to Denoise Distantly-Labeled Data for Entity Typing[C]//Proceedings of the 2019 Confe-rence of the North American Chapter of the Association for Computational Linguistics:Human Language Technologies.Minneapolis,Minnesota,USA:ACL,2019:2407-2417. [57]ONOE Y,BORATKO M,MCCALLUM A,et al.Modeling Fine-Grained Entity Types with Box Embeddings[C]//Proceedings of the 59th AnnualMeeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing.ACL,2021:2051-2064. [58]DAI H,SONG Y,WANG H.Ultra-Fine Entity Typing withWeak Supervision from a Masked Language Model[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing.ACL,2021:1790-1799. [59]LIU Q,LIN H,XIAO X,et al.Fine-grained Entity Typing via Label Reasoning[C]//Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing.Online and Punta Cana,Dominican Republic:ACL,2021:4611-4622. [60]LI B,YIN W,CHEN M.Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference [J].Transactions of the Association for Computational Linguistics,2022,10:607-622. [61]PAN W,WEI W,ZHU F.Automatic Noisy Label Correction for Fine-Grained Entity Typing[C]//Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence.Messe Wien,Vienna,Austria:International Joint Conferences on Artificial Intelligence Organization,2022:4317-4323. [62]DING N,CHEN Y,HAN X,et al.Prompt-learning for Fine-grained Entity Typing[C]//Findings of the Association for Computational Linguistics:EMNLP 2022.Abu Dhabi,United Arab Emirates:ACL,2022:6888-6901. [63]LI S,JI H,HAN J.Open Relation and Event Type Discovery with Type Abstraction[C]//Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing.Abu Dhabi,United Arab Emirates:ACL,2022:6864-6877. [64]KOMARLU T,JIANG M,WANG X,et al.OntoType:Ontology-Guided Zero-Shot Fine-Grained Entity Typing with Weak Supervision from Pre-Trained Language Models [J].arXiv:2305.12307,2023. [65]TANG X,ZHANG J,CHEN B,et al.BERTINT:a BERT-based interaction model for knowledge graph alignment[C]//Procee-dings of the Twenty-Ninth International Joint Conference on Artificial Intelligence.Yokohama,Japan:International Joint Conferences on Artificial Intelligence Organization.2020:3174-3180. [66]YANG H W,ZOU Y,SHI P,et al.Aligning Cross-Lingual Entities with Multi-Aspect Information[C]//Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing.Hong Kong,China:ACL,2019:4431-4441. [67]LIU Y,HUA W,XIN K,et al.TEA:Timeaware Entity Alignment in Knowledge Graphs[C]//Proceedings of the ACM Web Conference 2023.New York,NY,USA:ACM,2023:2591-2599. [68]ZHANG Z,HAN X,LIU Z,et al.ERNIE:Enhanced Language Representation with Informative Entities[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.Florence,Italy:ACL,2019:1441-1451. [69]ALT C,HÜBNER M,HENNIG L.Improving Relation Extraction by Pre-trained Language Representations[J].arXiv:1906.03088,2019. [70]SHI P,LIN J.Simple BERT Models for Relation Extraction and Semantic Role Labeling[J].arXiv:1904.05255,2019. [71]JOSHI M,CHEN D,LIU Y,et al.SpanBERT:Improving Pre-training by Representing and Predicting Spans[J].Transactions of the Association for Computational Linguistics,2020,8:64-77. [72]SOARES L B,FITZGERALD N,LING J,et al.Matching the Blanks:Distributional Similarity for Relation Learning[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.Florence,Italy:ACL,2019:2895-2905. [73]PARK S,KIM H.Improving Sentence-Level Relation Extraction through Curriculum Learning[J].arXiv:2107.09332,2021. [74]LYU S,CHEN H.Relation Classification with Entity Type Restriction[C]//Findings of the Association for Computational Linguistics.2021:390-395. [75]ZHENG J,CHEN Z.Sentence-Level Relation Extraction viaContrastive Learning with Descriptive Relation Prompts[J].arXiv:2304.04935,2023. [76]TANG H,CAO Y,ZHANG Z,et al.HIN:Hierarchical Infe-rence Network for Document-Level Relation Extraction[C]//Proceedings of Advances in Knowledge Discovery and Data Mining:24th Pacific-Asia Conference.Berlin,Germany:Springer-Verlag,2020:197-209. [77]WANG D,HU W,CAO E,et al.Global-to-Local Neural Networksfor Document-Level Relation Extraction[C]//Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing(EMNLP).ACL,2020:3711-3721. [78]ZENG S,XU R,CHANG B,et al.Double Graph Based Reaso-ning for Document-level Relation Extraction[C]//Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing(EMNLP).ACL,2020:1630-1640. [79]ZENG S,WU Y,CHANG B.SIRE:Separate Intra- and In-tersentential Reasoning for Document-level Relation Extraction[C]//Findings of the Association for Computational Linguistics.ACL,2021:524-534. [80]NAN G,GUO Z,SEKULIC I,et al.Reasoning with LatentStructure Refinement for Document-Level Relation Extraction[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.ACL,2020:1546-1557. [81]ZHANG N,CHEN X,XIE X,et al.Document-level Relation Extraction as Semantic Segmentation[C]//Proceedings of the 30th International Joint Conference on Artificial Intelligence.2021:3999-4006. [82]ZHOU W,HUANG K,MA T,et al.Document-Level Relation Extraction with Adaptive Thresholding and Localized Context Pooling[C]//Proceedings of the AAAI Conference on Artificial Intelligence.Vancouver,Canada:AAAI,2021:14612-14620. [83]JIANG Z,XU F,ARAKI J,et al.How Can We Know What Language Models Know? [J] Transactions of the Association for Computational Linguistics,2020,8:423-438. [84]HAN X,ZHAO W,DING N,et al.PTR:Prompt Tuning with Rules for Text Classification[J].AI Open,2022,3:182-192. [85]LI X L,LIANG P.Prefix-Tuning:Optimizing ContinuousPrompts for Generation[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing.ACL,2021:4582-4597. [86]QIN G,EISNER J.Learning How to Ask:Querying LMs with Mixtures of Soft Prompts[C]//Proceedings of the 2021 Confe-rence of the North American Chapter of the Association for Computational Linguistics:Human Language Technologies.ACL,2021:5203-5212. [87]CHEN X,ZHANG N,XIE X,et al.KnowPrompt:Knowledge-aware Prompt-tuning with Synergistic Optimization for Relation Extraction [C]//Proceedings of the ACM Web Conference 2022(WWW’22).New York,NY,USA:ACM,2022:2778-2788. [88]SHIN T,RAZEGHI Y,LOGAN IV R L,et al.AutoPrompt:Eliciting Knowledge from Language Models with Automatically Generated Prompts[C]//Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing(EMNLP).ACL,2020:4222-4235. [89]GUTIERREZ B J,MCNEAL N,WASHINGTON C,et al.Thinking about GPT-3 In-Context Learning for Biomedical IE? Think Again[C]//Findings of the Association for Computa-tional Linguistics:EMNLP 2022.Abu Dhabi,United Arab Emi-rates:ACL,2022:4497-4512. [90]MA Y,CAO Y,HONG Y,et al.Large language model is not a good few-shot information extractor,but a good reranker for hard samples[J].arXiv:2303.08559,2023. [91]WEI X,CUI X,CHENG N,et al.Zero-Shot Information Extraction via Chatting with ChatGPT[J].arXiv:2302.10205,2023. [92]SU Y,HAN X,ZHANG Z,et al.Coke-BERT:Contextualknowledge selection and embed-ding towards enhanced pre-trained language models[J].AI Open,2021,2:127-134. [93]YE D,LIN Y,DU J,et al.Coreferential Reasoning Learning for Language Representation[C]//Proceedings of the 2020 Confe-rence on Empirical Methods in Natural Language Processing(EMNLP).ACL,2020:7170-7186. [94]QIN Y,LIN Y,TAKANOBU R,et al.ERICA:Improving Entity and Relation Understanding for Pre-trained Language Models via Contrastive Learning[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing.ACL,2021:3350-3363. [95]PETERS ME,NEUMANN M,LOGAN R,et al.Knowledge Enhanced Contextual Word Representations[C]//Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing.Hong Kong,China:ACL,2019:43-54. [96]HE Z,LIU S,LI M,et al.Learning Entity Representation forEntity Disambiguation[C]//Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics.Sofia,Bulgaria:ACL,2013:30-34. [97]JIA C,SHI Y,YANG Q,et al.Entity Enhanced BERT Pre-training for Chinese NER[C]//Proceedings of the 2020 Confe-rence on Empirical Methods in Natural Language Processing(EMNLP).ACL,2020:6384-6396. [98]BROSCHEIT S.Investigating Entity Knowledge in BERT with Simple Neural End-To-End Entity Linking[C]//Proceedings of the 23rd Conference on Computational Natural Language Lear-ning(CoNLL).Hong Kong,China:ACL,2019:677-685. [99]WU L,PETRONI F,JOSIFOSKI M,et al.Scalable Zero-shot Entity Linking with Dense Entity Retrieval[C]//Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing(EMNLP).ACL,2020:6397-6407. [100]LI B Z,MIN S,IYER S,et al.Efficient One-Pass End-to-End Entity Linking for Questions[C]//Proceedings of the 2020 Conference on Empirical Methods in Natural Language Proces-sing(EMNLP).ACL,2020:6433-6441. [101]POERNER N,WALTINGER U,SCHTZE H.E-BERT:Efficient-Yet-Effective Entity Embeddings for BERT[C]//Findings of the Association for Computational Linguistics:EMNLP 2020.ACL,2020:803-818. [102]VERGA P,SUN H,BALDINI S L,et al.Adaptable and Interpretable Neural Memory Over Symbolic Knowledge[C]//Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:Human Language Technologies.Online:ACL,2021:3678-3691. [103]YAMADA I,ASAI A,SHINDO H,et al.LUKE:Deep Contextualized Entity Representations with Entity-aware Self-attention[C]//Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing(EMNLP).ACL,2020:6442-6454. [104]CAO N,WU L,POPAT K,et al.Multilingual AutoregressiveEntity Linking[J].Transactions of the Association for Computational Linguistics,2022,10:274-290. [105]CAO N,IZACARD G,RIEDEL S,et al.Autoregressive Entity Retrieval[C]//Proceedings of the 8th International Conference on Learning Representations.2020. [106]AYOOLA T,TYAGI S,FISHER J,et al.ReFinED:An Efficient Zero-shot-capable Approach to End-to-End Entity Linking[C]//Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics.ACL,2022:209-220. [107]KUMAR A,PANDEY A,GADIA R,et al.Building Knowledge Graph using Pre-trained Language Model for Learning Entity-aware Relationships[C]//2020 IEEE International Conference on Computing,Power and Communication Technologies(GUCON).Greater Noida,India,2020:310-315. [108]MELNYK I,DOGNIN P,DAS P.Grapher:Multi-Stage Know-ledge Graph Construction using Pretrained Language Models[C]//NeurIPS 2021 Workshop on Deep Generative Models and Downstream Applications.2021. [109]HAN J,COLLIER N,BUNTINE W.PiVe:Prompting with Ite-rative Verification Improving Graph-based Generative Capability of LLMs[J].arXiv:2305.12392,2023. [110]YANG L,CHEN H,LI Z,et al.ChatGPT is not Enough:Enhancing Large Language Models with Knowledge Graphs for Fact-aware Language Modeling[J].arXiv:2306.11489,2023. [111]ZHU Y,WANG X,CHEN J,et al.LLMs for Knowledge Graph Construction and Reasoning:Recent Capabilities and Future Opportunities[J].arXiv:2305.13168,2023. [112]DENG X,SUN H,LEES A,et al.TURL:Table Understanding through Representation Learning[J].ACM SIGMOD Record,2022,51(1):33-40. [113]TANG N,FAN J,LI F,et al.RPT:Relational Pre-TrainedTransformer is Almost All You Need towards Democratizing Data Preparation[C]//Proc.VLDB Endow.2021:1254-1261. [114]FAN G,WANG J,LI Y,et al.Semantics-Aware Dataset Disco-very from Data Lakes with Contextualized Column-Based Representation Learning[C]//Proc.VLDB Endow.2023:1726-1739. [115]SUHARA Y,LI J,LI Y,et al.Annotating Columns with Pre-trained Language Models[C]//Proceedings of the 2022 International Conference on Management of Data(SIGMOD’22).New York,NY,USA:ACM,2022:1493-1503. [116]KORINI K,BIZER C.Column Type Annotation using ChatGPT[J].arXiv:2306.00745,2023. [117]KAZEMI M,MITTAL S,RAMACHAN-DRAN D.Under-standing Finetuning for Factual Knowledge Extraction from Language Models[J].arXiv:2301.11293,2023. [118]BOSSELUT A,RASHKIN H,SAP M,et al.COMET:Com-monsense Transformers for Automatic Knowledge Graph Construction[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.Florence,Italy:ACL,2019:4762-4779. [119]HAO S,TAN B,TANG K,et al.BertNet:Harvesting Know-ledge Graphs with Arbitrary Relations from Pretrained Language Models[C]//Findings of the Association for Computational Linguistics.Toronto,Canada:ACL,2023:5000-5015. [120]LIU Y,MAIER W,MINKER W,et al.ConceptNet infused DialoGPT for Underlying Commonsense Understanding and Reasoning in Dialogue Response Generation[J].arXiv:2209.15109,2022. [121]WEST P,BHAGAVATULA C,HESSEL J,et al.SymbolicKnowledge Distillation:from General Language Models to Commonsense Models[C]//Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics:Human Language Technologies.Seattle,United States:ACL,2022:4602-4625. [122]ZHANG Z,LIU X,ZHANG Y,et al.Pretrain-KGE:LearningKnowledge Representation from Pretrained Language Models[C]//Findings of the Association for Computational Linguistics:EMNLP 2020.ACL,2020:259-266. [123]WANG X,GAO T,ZHU Z,et al.KEPLER:A Unified Model for Knowledge Embedding and Pre-trained Language Representation[J].Transactions of the Association for Computational Linguistics.2021,9:176-194. [124]ALAM M,RONY M,NAYYERI M,et al.Language ModelGuided Knowledge Graph Embeddings[J].IEEE Access,2022,10:76008 -76020. [125]REIMERS N,GUREVYCH I.Sentence-BERT:Sentence Em-beddings using Siamese BERT-Networks[C]//Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing.Hong Kong,China:ACL,2019:3982-3992. [126]FastText:Library for efficient text classification and representation learning [EB/OL].[2023-09-21].https://fasttext.cc. [127]KANG M,BAEK J,HWANG S J.KALA:Knowledge-Augmented Language Model Adaptation[C]//Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics:Human Language Technologies.Seattle,United States:ACL,2022:5144-5167. [128]NAYYERI M,WANG Z,AKTER M M,et al.IntegratingKnowledge Graph embedding and pretrained Language Models in Hypercomplex Spaces[J].arXiv:2208.02743,2022. [129]LIU W,ZHOU P,ZHAO Z,et al.K-BERT:Enabling Language Representation with Knowledge Graph[C]//Proceedings of the AAAI Conference on Artificial Intelligence.AAAI,2020:2901-2908. [130]SUN T,SHAO Y,QIU X,et al.CoLAKE:Contextualized Language and Knowledge Embedding[C]//Proceedings of the 28th International Conference on Computational Linguistics.Barce-lona,Spain:International Committee on Computational Linguistics,2020:3660-3670. [131]WANG P,XIE X,WANG X,et al.Reasoning Through Memorization:Nearest Neighbor Knowledge Graph Embeddings[J].arXiv:2201.05575,2022. [132]WANG X,HE Q,LIANG J,et al.Language Models as Know-ledge Embeddings[C]//Proceedings of the 31st International Joint Conference on Artificial Intelligence.Vienna,Austria:International Joint Conferences on Artificial Intelligence Organization,2022:2291-2297. [133]BONDARENKO D A,FERROD R,DI CARO L.CombiningContrastive Learning and Knowledge Graph Embeddings to develop medical word embeddings for the Italian language[J].ar-Xiv:2211.05035. [134]WANG J,WANG C,QIU M,et al.KECP:Knowledge Enhanced Contrastive Prompting for Few-shot Extractive Question Answering[C]//Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing.Abu Dhabi,United Arab Emirates:ACL,2022:3152-3163. [135]NASSIRI A K,PERNELLE N,SAIS F,et al.Knowledge Graph Refinement based on Triplet BERT-Networks[J].arXiv:2211.10460. [136]YAO L,MAO C,LUO Y.KG-BERT:BERT for KnowledgeGraph Completion[J].arXiv:1909.03193,2019. [137]KIM B,HONG T,KO Y,et al.Multi-Task Learning for Know-ledge Graph Completion with Pre-trained Language Models[C]//Proceedings of the 28th International Conference on Computational Linguistics,Barcelona,Spain:International Committee on Computational Linguistics.2020:1737-1743. [138]CHEN S,CHENG H,LIU X,et al.Pre-training Transformersfor Knowledge Graph Completion[J].arXiv:2303.15682,2023. [139]WANG L,ZHAO W,WEI Z,et al.SimKGC:Simple contrastive knowledge graph completion with pre-trained language models[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics.Dublin,Ireland:ACL,2022:4281-4294. [140]LV X,LIN Y,CAO Y,et al.Do Pre-trained Models BenefitKnowledge Graph Completion? A Reliable Evaluation and a Reasonable Approach[C]//Proceedings of the Findings of the Association for Computational Linguistics.Dublin,Ireland:ACL,2022:3570-3581. [141]CHEN C,WANG Y,SUN A,et al.Dipping PLMs Sauce:Bri-dging Structure and Text for Effective Knowledge Graph Completion via Conditional Soft Prompting[C]//Proceedings of the Findings of the Association for Computational Linguistics.Toronto,Canada:ACL,2023:11489-11503. [142]CHOI B,JANG D,KO Y.MEM-KGC:Masked Entity Model for Knowledge Graph Completion With Pre-Trained Language Model[J].IEEE Access,2021,9:132025-132032. [143]CLOUATRE L,TREMPE P,ZOUAQ A,et al.MLMLM:Link Prediction with Mean Likelihood Masked Language Model[C]//Proceedings of the Findings of the Association for Computa-tional Linguistics.ACL,2021:4321-4331. [144]CHOI B,KO Y.Knowledge graph extension with a pre-trained language model via unified learning method.Knowledge-Based Systems[J].Knowledge-Based Systems,2023,262:110245. [145]PENG B,LIANG S,ISLAM M.Bi-Link:Bridging Inductive Link Predictions from Text via Contrastive Learning of Transformers and Prompts[J].arXiv:2210.14463,2022. [146]JIANG P,AGARWAL S,JIN B,et al.Text-Augmented Open Knowledge Graph Completion via Pre-Trained Language Models[J].arXiv:2305.15597,2023. [147]CHEN Z,XU C,SU F,et al.Incorporating Structured Sentences with Time-enhanced BERT forFully-inductive Temporal Relation Prediction[J].arXiv:2304.04717,2023. [148]XU W,LIU B,PENG M,et al.Pre-trained Language Modelwith Prompts for Temporal Knowledge Graph Completion[J].arXiv:2305.07912,2023. [149]LI D,YANG S,XU K,et al.Multi-task Pre-training Language Model for Semantic Network Completion[J].arXiv:2201.04843,2022. [150]SAXENA A,KOCHSIEK A,GEMULLA R.Sequence-to-Se-quence Knowledge Graph Completion and Question Answering[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics.Dublin,Ireland:ACL,2022:2814-2828. [151]XIE X,ZHANG N,LI Z,et al.From Discrimination to Generation:Knowledge Graph Completion with Generative Transfor-mer[C]//Proceedings of the Web Conference 2022(WWW 22).New York,NY,USA:ACM,2022:162-165. [152]CHEN C,WANG Y,LI B,et al.Knowledge Is Flat:A Seq2Seq Generative Framework for Various Knowledge Graph Completion[C]//Proceedings of the 29th International Conference on Computational Linguistics,Gyeongju.Republic of Korea:International Committee on Computational Linguistics,2022:4005-4017. [153]ZHA H,CHEN Z,YAN X.Inductive Relation Prediction by BERT[C]//Proceedings of the AAAI Conference on Artificial Intelligence.Vancouver,Canada:AAAI,2022:5923-5931. [154]LOGAN R,LIU N F,PETERS M E,et al.Barack’s WifeHillary:Using Knowledge Graphs for Fact-Aware Language Modeling[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.Florence,Italy:ACL,2019:5962-5971. [155]LAN Y,HE S,LIU K,et al.Path-based knowledge reasoning with textual semantic information for medical knowledge graph completion[J].BMC Medical Informatics and Decision Making,2021,21(S9):1-12. [156]WANG B,SHEN T,LONG G,et al.Structure-Augmented Text Representation Learning for Efficient Knowledge Graph Completion[C]//Proceedings of the Web Conference 2021(WWW’21).New York,NY,USA:ACM,2021:1737-1748. [157]SHEN J,WANG C,GONG L,et al.Joint Language Semanticand Structure Embedding for Knowledge Graph Completion[C]//Proceedings of the 29th International Conference on Computational Linguistics.Gyeongju,Republic of Korea:International Committee on Computational Linguistics,2022:1965-1978. [158]HE J,JIA L,WANG L,et al.MoCoSA:Momentum Contrast for Knowledge Graph Completion with Structure-Augmented Pre-trained Language Models[J].arXiv:2308.08204,2023. [159]LIN B Y,CHEN X,CHEN J,et al.KagNet:Knowledge-Aware Graph Networks for Commonsense Reasoning[C]//Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing.Hong Kong,China:ACL,2019:2829-2839. [160]LUKOVNIKOV D,FISCHER A,LEHMA-NN J.PretrainedTransformers for Simple Question Answering over Knowledge Graphs[C]//Proceedings of the 18th International Semantic Web Conference(ISWC 2019).Auckland,New Zealand:Sprin-ger,2019:470-486. [161]LUO D,SU J,YU S.A BERT-based Approach with Relation-aware Attention for Knowledge Base Question Answering[C]//Proceedings of the 2020 International Joint Conference on Neural Networks(IJCNN).Glasgow,UK:IEEE,2020:1-8. [162]FVRY T,SOARES LB,FITZGERALD N,et al.Entities as Experts:Sparse Memory Access with Entity Supervision[C]//Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing(EMNLP).ACL,2020:4937-4951. [163]LIU Y,LIANG D,FANG F,et al.Time-Aware Multiway Adaptive Fusion Network for Temporal Knowledge Graph Question Answering[C]//Proceedings of 2023 IEEE International Conference on Acoustics,Speech and Signal Processing(ICASSP).Rhodes Island,Greece:IEEE,2023:1-5. [164]PETRONI F,ROCKTÄSCHEL T,RIEDEL S,et al.Language Models as Knowledge Bases?[C]//Proceedings of the 2019 Conference on Empirical Methods in Natural Language Proces-sing and the 9th International Joint Conference on Natural Language Processing.Hong Kong,China:ACL,2019:2463-2473. [165]ZHANG J,ZHANG X,YU J,et al.Subgraph retrieval enhanced model for multi-hop knowledge base question answering[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics.Dublin,Ireland:ACL,2022:5773-5784. [166]FENG Y,CHEN X,LIN B Y,et al.Scalable Multi-Hop Relatio-nal Reasoning for Knowledge-Aware Question Answering[C]//Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing(EMNLP).ACL,2020:1295-1309. [167]MISRA K,SANTOS C N,SHAKERI S.Triggering Multi-HopReasoning for Question Answering in Language Models using Soft Prompts and Random Walks[J].arXiv:2306.04009,2023. [168]XU Y,ZHU C,XU R,et al.Fusing Context Into KnowledgeGraph for Commonsense Question Answering[C]//Findings of the Association for Computational Linguistics.ACL,2021:1201-1207. [169]ZHANG M,DAI R,DONG M,et al.DRLK:Dynamic Hierarchical Reasoning with Language Model and Knowledge Graph for Question Answering[C]//Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing.Abu Dhabi,United Arab Emirates:ACL,2022:5123-5133. [170]ZHANG Y,YAO Q.Knowledge Graph Reasoning with Relational Digraph[C]//Proceedings of the ACM Web Conference 2022(WWW ’22).New York,NY,USA:ACM,2022:912-924. [171]YAN Y,LI R,WANG S,et al.Large-Scale Relation Learning for Question Answering over Knowledge Bases with Pre-trained Language Models[C]//Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing.ACL,2021:3653-3660. [172]JIANG J,ZHOU K,ZHAO X,et al.UniKGQA:Unified Retrieval and Reasoning for Solving Multi-hop Question Answe-ring Over Knowledge Graph[C]//Proceedings of the 11st International Conference on Learning Representations.Kigali,Rwanda,2023. [173]WANG R,TANG D,DUAN N,et al.K-Adapter:InfusingKnowledge into Pre-Trained Models with Adapters[C]//Fin-dings of the Association for Computational Linguistics.ACL,2021:1405-1418. [174]HU Z,XU Y,YU W,et al.Empowering Language Models with Knowledge Graph Reasoning for Open-Domain Question Answering[C]//Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing.Abu Dhabi,United Arab Emirate:ACL,2022:9562-9581. [175]BAEK J,AJI A F,SAFFARI A.Knowledge-Augmented Language Model Prompting for Zero-Shot Knowledge Graph Question Answering[C]//Proceedings of the 1st Workshop on Natural Language Reasoning and Structured Explanations(NLRSE).Toronto,Canada:ACL,2023:78-106. [176]JIANG J,ZHOU K,DONG Z,et al.StructGPT:A GeneralFramework for Large Language Model to Reason over Structured Data[J].arXiv:2305.09645,2023. [177]YASUNAGA M,REN H,BOSSELUT A,et al.QA-GNN:Reasoning with Language Models and Knowledge Graphs for Question Answering[C]//Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics:Human Language Technologies.ACL,2021:535-546. [178]SUN Y,SHI Q,QI L,et al.JointLK:Joint Reasoning with Language Models and Knowledge Graphs for Commonsense Question Answering[C]//Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics.United States,ACL,2022:5049-5060. [179]ZHANG X,BOSSELUT A,YASUNAGA M,et al.GreaseLM:Graph REASoning Enhanced Language Models for Question Answering[C]//Proceedings of the 10th International Confe-rence on Learning Representations.2022. [180]CAO X,LIU Y.ReLMKG:reasoning with pre-trained language models and knowledge graphs for complex question answering[J].Applied Intelligence,2023,53(10):12032-12046. [181]HU N,WU Y,QI G,et al.An empirical study of pre-trained language models in simple knowledge graph question answering[J].arXiv:2303.10368,2023. [182]YANG S,FENG D,QIAO L,et al.Exploring Pre-trained Language Models for Event Extraction and Generation[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.Florence,Italy:ACL,2019:5284-5294. [183]DENG S,ZHANG N,LI L,et al.OntoED:Low-resource Event Detection with Ontology Embedding[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing.ACL,2021:2828-2839. [184]WADDEN D,WENNBERG U,LUAN Y,et al.Entity,Rela-tion,and Event Extraction with Contextualized Span Representations[C]//Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing.Hong Kong,China:ACL,2019:5784-5789. [185]LIU J,CHEN Y,LIU K,et al.Event Extraction as Machine Reading Comprehension[C]//Proceedings of the 2020 Confe-rence on Empirical Methods in Natural Language Processing(EMNLP).ACL,2020:1641-1651. [186]PAOLINI G,ATHIWARATKUN B,KRONE J,et al.Structured Prediction as Translation between Augmented Natural Languages[C]//Proceedings of the 9th International Confe-rence on Learning Representations.2021. [187]LIU X,HUANG H,SHI G,et al.Dynamic Prefix-Tuning for Generative Template-based Event Extraction[C]//Proceedings of the 60th Annual Meeting of the Association for Computa-tional Linguistics.Dublin,Ireland:ACL,2022:5216-5228. [188]LYU Q,ZHANG H,SULEM E,et al.Zero-shot Event Extraction via Transfer Learning:Challenges and Insights[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing.ACL,2021:322-332. [189]LU Y,LIN H,XU J,et al.Text2Event:Controllable Sequence-to-Structure Generation for End-to-end Event Extraction[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing.ACL,2021:2795-2806. [190]LI S,JI H,HAN J.Document-Level Event Argument Extraction by Conditional Generation[C]//Proceedings of the 2021 Confe-rence of the North American Chapter of the Association for Computational Linguistics.ACL,2021:894-908. [191]LU Y,LIU Q,DAI D.Unified Structure Generation for Universal Information Extraction[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics.Dublin,Ireland:ACL,2022:5755-5772. [192]ZENG Q,ZHAN Q,JI H.EA2E:Improving Consistency withEvent Awareness for Document-Level Argument Extraction[C]//Findings of the Association for Computational Linguistics:NAACL 2022.Seattle,United States:ACL,2022:2649-2655. [193]MA Y,WANG Z,CAO Y.Prompt for Extraction? PAIE:Prompting Argument Interaction for Event Argument Extraction[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics.Dublin,Ireland:ACL,2022:6759-6774. [194]DU X,LI S,JI H.Dynamic Global Memory for Document-level Argument Extraction[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics.Dublin,Ireland:ACL,2022:5264-5275. [195]LIN J,CHEN Q.PoKE:A Prompt-based Knowledge Eliciting Approach for Event Argument Extraction[J].arXiv:2109.05190,2021. [196]XU R,WANG P,LIU T.A Two-Stream AMR-enhanced Model for Document-level Event Argument Extraction[C]//Procee-dings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics.Seattle,United States:ACL,2022:5025-5036. [197]WANG X,GUI L,HE Y.Document-Level Multi-Event Extraction with Event Proxy Nodes and Hausdorff Distance Minimization[C]//Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics.Toronto,Canada:ACL,2023:10118-10133. [198]YANG Y,GUO Q,HU X,et al.An AMR-based Link Prediction Approach for Document-level Event Argument Extraction[J].arXiv:2305.19162,2023. [199]SU B,HSU S,LAI K.Temporal Relation Extraction with a Graph-Based Deep Biaffine Attention Model[J].arXiv:2201.06125,2022. [200]WANG H,ZHANG H,DENG Y.Extracting or Guessing? Improving Faithfulness of Event Temporal Relation Extraction[C]//Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics.Dubrovnik,Croatia:ACL,202:541-553. [201]TAN X,PERGOLA G,HE Y.Event Temporal Relation Extraction with Bayesian Translational Model[J].arXiv:2302.04985,2023. [202]YAO H,BREITFELLER L,NAIK A.Multi-Scale Contrastive Co-Training for Event Temporal Relation Extraction[J].arXiv:2209.00568,2022. [203]HUANG Q,HU Y,ZHU S.More than Classification:A Unified Framework for Event Temporal Relation Extraction[C]//Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics.Toronto,Canada:ACL,2023:9631-9646. [204]YUAN C,XIE Q,ANANIADOU S.Zero-shot Temporal Relation Extraction with ChatGPT[J].arXiv:2304.05454,2023. [205]LI S,ZHAO R,LI M.Open-Domain Hierarchical Event Schema Induction by Incremental Promptingand Verification[C]//Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics.Toronto,Canada:ACL,2023:5677-5697. [206]AutoGPT:the heart of the open-source agent ecosystem [EB/OL].[2023-09-22] https://github.com/Significant-Gravitas/Auto-GPT. [207]LIU J,CHEN Y,ZHAO J.Knowledge enhanced event causality identification with mention masking generalizations[C]//Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence(IJCAI’20).ACL,2021:3608-3614. [208]HU Z,LI Z,JIN X.Semantic Structure Enhanced Event Causality Identification[C]//Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics.Toronto,Canada:ACL,2023:10901-10913. [209]XIANG W,ZHAN C,WANG B.DAPrompt:Deterministic Assumption Prompt Learning for Event Causality Identification[J].arXiv:2307.09813,2023. [210]GAO J,XIAO D,QIN B,et al.Is ChatGPT a Good Causal Reasoner? A Comprehensive Evaluation[C]//Findings of the Association for Computational Linguistics:EMNLP 2023,Singapore,ACL,2023:11111-11126 [211]ZUO X,CHEN Y,LIU K,et al.KnowDis:Knowledge Enhanced Data Augmentation for Event Causality Detection via Distant Supervision[C]//Proceedings of the 28th InternationalConfe-rence on Computational Linguistics.Barcelona,Spain:International Committee on Computational Linguistics,2020:1544-1550. [212]ZUO X,CAO P,CHEN Y,et al.LearnDA:Learnable Know-ledge-Guided Data Augmentation for Event Causality Identification[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing.ACL,2021:3558-3571. [213]ZUO X,CAO P,CHEN Y,et al.Improving Event CausalityIdentification via Self-Supervised Representation Learning on External Causal Statement[C]//Findings of the Association for Computational Linguistics.ACL,2021:2162-2172. [214]CAO P,ZUO X,CHEN Y,et al.Knowledge-Enriched EventCausality Identification via Latent Structure Induction Networks[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing.ACL,2021:4862-4872. [215]CHEN M,CAO Y,DENG K,et al.ERGO:Event RelationalGraph Transformer for Document-level Event Causality Identification[C]//Proceedings of the 29th International Conference on Computational Linguistics.Gyeongju.Republic of Korea:International Committee on Computational Linguistics,2022:2118-2128. [216]BARHOM S,SHWARTZ V,EIREW A,et al.Revisiting Joint Modeling of Cross-document Entity and Event Coreference Resolution[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.Florence,Italy:ACL,2019:4179-4189. [217]CHOUBEY P K,HUANG R.Event Coreference Resolution by Iteratively Unfold Inter-dependencies among Events[C]//Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing.Copenhagen,Denmark:ACL,2017:2124-2133. [218]ZENG Y,JIN X,GUAN S,et al.Event Coreference Resolution with their Paraphrases and Argument-aware Embeddings[C]//Proceedings of the 28th International Conference on ComputationalLinguistics.Barcelona,Spain(Online):International Committee on Computational Linguistics,2020:3084-3094. [219]CACIULARU A,COHAN A,BELTAGY I,et al.CDLM:Cross-Document Language Modeling[C]//Findings of the Association for Computational Linguistics:EMNLP 2021.Punta Cana,Dominican Republic:ACL,2021:2648-2662. [220]LU Y,LIN H,TANG J,et al.End-to-end neural event corefe-rence resolution[J].Artificial Intelligence,2022,303:103632. [221]LU J,NG V.Span-Based Event Coreference Resolution[C]//Proceedings of the AAAI Conference on Artificial Intelligence,AAAI,2021:13489-13497. [222]LU J,NG V.Constrained Multi-Task Learning for Event Core-ference Resolution[C]//Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics.ACL,2021:4504-4514. [223]TRAN H M,PHUNG D,NGUYEN T H.Exploiting Document Structures and Cluster Consistencies for Event Coreference Re-solution[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing.ACL,2021:4840-4850. [224]RAVI S,TANNER C,NG R,et al.What happens before and after:Multi-Event Commonsense in Event Coreference Resolution[C]//Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics.Dubrovnik,Croatia:ACL,2023:1708-1724. [225]HSU B,HORWOOD G.Contrastive Representation Learningfor Cross-Document Coreference Resolution of Events and Entities[C]//Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics.Seattle,United States:ACL,2022:3644-3655. [226]CHEN Z,ZHANG Y,FANG Y,et al.Knowledge Graphs Meet Multi-Modal Learning:A Comprehensive Survey[J].arXiv:2402.05391. [227]ZHU X,LI Z,WANG X,et al.Multi-Modal Knowledge Graph Construction and Application:A Survey[J].IEEE Transactions on Knowledge and Data Engineering,2024,36(2):715-735. [228]YU J,JIANG J,YANG L,et al.Improving multimodal named entity recognition via entity span detection with unified multimodal transformer[C]//Proceedings of the 58th Annual Mee-ting of the Association for Computational Linguistics.ACL,2020:3342-3352. [229]LU J,ZHANG D,ZHANG J,et al.Flat multi-modal interaction transformer for named entity recognition.[C]//Proceedings of the 29th International Conference on Computational Linguistics,Gyeongju,Republic of Korea.International Committee on Computational Linguistics.2022:2055-2064. [230]WANG P,CHEN X,SHANG Z,et al.Multimodal Named Entity Recognition with Bottleneck Fusion and Contrastive Learning[J].IEICE Transactions on Information and Systems,2023,106(4):545-555. [231]WANG X,GUI X,JIANG Y,et al.ITA:Image-Text Align-ments for Multi-Modal Named Entity Recognition[C]//Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics:Human Language Technologies.Seattle,United States,ACL,2022:3176-3189. [232]XU B,HUANG S,SHA C,et al.MAF:A General Matching and Alignment Framework for Multimodal Named Entity Recognition[C]//Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining.New York,NY,USA:ACM,2022:1215-1223. [233]CHENG J,LONG K,ZHANG S,et al.Text-Image Scene Graph Fusion for Multi-Modal Named Entity Recognition[J].IEEE Transactions on Artificial Intelligence,2022,5(6):2828-2839. [234]GUO A,ZHAO X,TANZ,et al.MGICL:Multi-Grained Interaction Contrastive Learning for Multimodal Named Entity Recognition[C]//Proceedings of the 32nd ACM International Confe-rence on Information and Knowledge Management.Birmingham,United Kingdom:ACM,2023:639-648. [235]CHEN Z,CHEN J,ZHANG W,et al.MEAformer:Multi-modal Entity Alignment Transformer for Meta Modality Hybrid[J].arXiv:2212.14454,2022. [236]CHEN Z,GUO L,FANG Y,et al.2023.Rethinking Uncertainly Missing and Ambiguous Visual Modality in Multi-Modal Entity Alignment[J].arXiv:2307.16210,2023. [237]LI X,YIN X,LI C,et al.Oscar:Object-Semantics Aligned Pre-training for Vision-Language Tasks[C]//Proceedings of the 16th European Conference on Computer Vision(ECCV).Glasgow,UK,Springer,2020:121-137. [238]PAN X,YE T,HAN D,et al.Contrastive Language-Image Pre-Training with Knowledge Graphs[C]//Proceedings of the 36th Conference on Neural Information Processing Systems(NeurIPS).New Orleans,Louisiana,USA:Morgan Kaufmann,2022:22895-22910. [239]CHEN X,ZHANG N,LI L,et al.Good Visual Guidance Make A Better Extractor:Hierarchical Visual Prefix for Multimodal Entity and Relation Extraction[C]//Findings of the Association for Computational Linguistics:NAACL 2022.Seattle,United States:ACL,2022:1607-1618. [240]LI Q,GUO S,JI C,et al.Dual-Gated Fusion with Prefix-Tuning for Multi-Modal Relation Extraction[C]//Findings of the Association for Computational Linguistics.Toronto,Canada:ACL 2023:8982-8994. [241]WANG X,CAI J,JIANG Y,et al.Named Entity and RelationExtraction with Multi-Modal Retrieval[C]//Findings of the Association for Computational Linguistics:EMNLP 2022.Abu Dhabi,United Arab Emirates.ACL,2022:5925-5936. [242]DU Z,LI Y,GUO X,et al.Training Multimedia Event Extraction with Generated Images and Captions[C]//Proceedings of the 31st ACM International Conference on Multimedia.New York,NY,USA:ACL,2023:5504-5513. [243]CHEN B,LIN X,THOMAS C,et al.Joint Multimedia Event Extraction from Video and Article[C]//Findings of the Asso-ciation for Computational Linguistics:EMNLP 2021.Punta Cana,Dominican Republic,ACL,2021:74-88. [244]LIU J,CHEN Y,XU J.Multimedia Event Extractionfrom News With a Unified Contrastive Learning Framework[C]//Procee-dings of the 30th ACM International Conference on Multimedia.New York,NY,USA:ACM,2022:1945-1953. [245]LI M,XU R,WANG S,et al.CLIP-Event:Connecting Text and Images with Event Structures[C]//2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition(CVPR).New Orleans,LA,USA:IEEE,2022:16399-16408. [246]MOGHIMIFAR F,SHIRI,F,HAFFARI R,et al.Few-shot Domain-Adaptative Visually-fused Event Detection from Text[C]//2023 26th International Conference on Information Fusion(FUSION).Charleston,SC,USA:IEEE,2023:1-8. [247]LI J,ZHANG C,DU M,et al.Three Stream Based Multi-level Event Contrastive Learning for Text-Video Event Extraction[C]//Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing,Singapore.ACL,2023:1666-1676. [248]LI H,YATSKAR M,YIN D,et al.VisualBERT:A Simple and Performant Baseline for Vision and Language[J].arXiv:1908.03557,2019. [249]LI H,YATSKAR M,YIN D,et al.Visual-BERT:A Simple and Performant Baseline for Vision and Language[J].arXiv:1908.03557,2019. [250]TAN H,BANSAL M.LXMERT:Learning Cross-Modality Encoder Representations from Transformers[C]//Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing(EMNLP-IJCNLP).Hong Kong,China:ACL,2019:5100-5111 [251]SU W,ZHU X,CAO Y,et al.VL-BERT:Pre-training of Gene-ric Visual-Linguistic Representations[C]//Proceedings of the 8th International Conference on Learning Representations.2020. [252]HUANG Z,ZENG Z,LIU B,et al.Pixel-BERT:Aligning ImagePixels with Text by Deep Multi-Modal Transformers[J].arXiv:2004.00849,2020. [253]LIN J,YANG A,ZHANG Y,et al.InterBERT:Vision-and-Language Interaction for Multi-modal Pretraining[J].arXiv:2003.13198,2020. [254]LI G,DUAN N,FANG Y,et al.Unicoder-VL:A Universal Encoder for Vision and Language by Cross-Modal Pre-Training[C]//Proceedings of the AAAI Conference on Artificial Intelligence.AAAI,2020:11336-11344. [255]HAN C,JIA H.Multi-modal Representation Learning with Self-adaptive Threshold for Commodity Verification[C]//Procee-dings of the China Conference on Knowledge Graph and Semantic Computing(CCKS).Singapore:Springer,2022:172-179. [256]YU F,TANG J,YIN W,et al.ERNIE-ViL:Knowledge En-hanced Vision-Language Representations through Scene Graphs[C]//Proceedings of the AAAI Conference on Artificial Intelligence.AAAI,2021:3208-3216. [257]CUI Y,YU Z,WANG C,et al.ROSITA:Enhancing Vision-and-Language Semantic Alignments via Cross-and Intra-modal Knowledge Integration[J].arXiv:2108.07073,2021. [258]LI W,GAO C,NIU G,et al.UNIMO:Towards Unified-ModalUnderstanding and Generation via Cross-Modal Contrastive Learning[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing.ACL,2021:2592-2607. [259]SUN Z,HUANG J,LIN J,et al.Joint Pre-training and Local Re-training:Transferable Representation Learning on Multi-source Knowledge Graphs[J].arXiv:2306.02679,2023. [260]HUANG N,DESHPANDE Y R,LIU Y,et al.Endowing Lan-guage Models with Multimodal Knowledge Graph Representations[J].arXiv:2206.13163,2022. [261]ALBERTS H,HUANG N,DESHPANDE Y,et al.VisualSem:A High-Quality Knowledge Graph for Vision and Language[C]//Proceedings of the 1st Workshop on Multilingual Representation Learning.Punta Cana,Dominican Republic:ACL,2021:138-152. [262]PESARANGHADER A,SAJED T.RECipe:Does a Multi-Modal Recipe Knowledge Graph Fit a Multi-Purpose Recommendation System[J].arXiv:2308.04579,2023. [263]SONG K,TAN X,QIN T,et al.MPNet:Masked and Permuted Pre-training for Language Understanding[C]//Proceedings of the 34th International Conference on Neural Information Processing Systems(NIPS’20).Red Hook,NY,USA:Curran Associates Inc,2020:16857-16867. [264]DAY K,CHRISTL D,SALVI R,et al.Video Pre-trained Transformer:A Multimodal Mixture of Pre-trained Experts[J].ar-Xiv:2304.10505,2023. [265]ZHANG N,LI L,CHEN X,et al.Multimodal Analogical Rea-soning over Knowledge Graphs[C]//Proceedings of the 11th International Conference on Learning Representations.Kigali Rwanda,2023. [266]CHEN X,ZHANG N,LI L,et al.Hybrid Transformer withMulti-level Fusion for Multimodal Knowledge Graph Completion[C]//Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval(SIGIR’22).New York,NY,United States:ACM,2022:904-915. [267]YU J,ZHU Z,WANG Y,et al.Cross-modal knowledge reaso-ning for knowledge-based visual question answering[J].Pattern Recognition,2020,108,Article No.107563. [268]LIANG K,ZHOU S,LIU Y,et al.Structure Guided Multi-modal Pre-trained Transformer for Knowledge Graph Reasoning[J].arXiv:2307.03591,2023. [269]ZHANG Y,ZHANG W.Knowledge Graph Completion withPre-trained Multimodal Transformer and Twins Negative Sampling[J].arXiv:2209.07084,2022. [270]ZHANG Y,CHEN Z,ZHANG W.MACO:A Modality Adversarial and Contrastive Framework for Modality-missing Multi-modal Knowledge Graph Completion[J].arXiv:2308.06696,2023. [271]PENG B,GALLEY M,HE P,et al.Check Your Facts and Try Again:Improving Large Language Models with External Know-ledge and Automated Feedback[J].arXiv:2302.12813,2023. [272]BRUNO A,MAZZEO P,CHETOUANI A,et al.Insights into Classifying and Mitigating LLMs’ Hallucinations[J].arXiv:2311.08117,2023. [273]KANDULA V,BHATTACHARYYA P.Decision KnowledgeGraphs:Construction of and Usage in Question Answering for Clinical Practice Guidelines[J].arXiv:2308.02984,2023. [274]LYU K,TIAN Y,SONG Y,et al.Causal knowledge graph construction and evaluation for clinical decision support of diabetic nephropathy[J].Journal of Biomedical Informatics,2023,139(C),Article No.104298. [275]JAIMINI U,SHETH A.CausalKG:Causal Knowledge Graph Explainability Using Interventional and Counterfactual Reaso-ning[J].IEEE Internet Computing,2022,26(1):43-50. [276]HUANG H.Causal Relationship over Knowledge Graphs[C]//Proceedings of the 31st ACM International Conference on Information & Knowledge Management.Atlanta,GA,USA:ACL,2022:5116-5119. [277]ZHU W,LIU H,DONG Q,et al.Multilingual Machine Translation with Large Language Models:Empirical Results and Analysis[J].arXiv:2304.04675,2023. [278]JIN B,LIU G,HAN C,et al.Large Language Models onGraphs:A Comprehensive Survey[J].arXiv:2312.02783,2023. [279]WU L,QIU Z,ZHENG Z,et al.Exploring Large LanguageModel for Graph Data Understanding in Online Job Recommendations[J].arXiv:2307.05722,2023. [280]SUN J,XU C,TANG L,et al.Think-on-Graph:Deep and Responsible Reasoning of Large Language Model on Knowledge Graph[C]//Proceedings of the 12th International Conference on Learning Representations.Vienna,Austria,2024. [281]WEN Y,WANG Z,SUN J.MindMap:Knowledge GraphPrompting Sparks Graph of Thoughts in Large Language Mo-dels[J].arXiv:2308.07929,2023. [282]CHEN M,TAO Z,TANG W.Enhancing Emergency Decision-making with Knowledge Graphs and Large Language Models[J].arXiv:2311.08732,2023. [283]GAO J,DING X,QIN B,et al.Is ChatGPT a Good Causal Reasoner? A Comprehensive Evaluation[C]//Findings of the Association for Computational Linguistics(EMNLP 2023).Singapore:ACL,2023:11111-11126. [284]YUAN C,XIE Q,ANANIADOU S.Zero-shot Temporal Relation Extraction with ChatGPT[C]//The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks.Toronto,Canada:ACL,2023:92-102. [285]CAFFAGNI D,COCCHI F,BARSELLOTTI L,et al.The Revolution of Multimodal Large Language Models:A Survey[J].arXiv:2402.12451,2024. |
[1] | DUN Jingbo, LI Zhuo. Survey on Transmission Optimization Technologies for Federated Large Language Model Training [J]. Computer Science, 2025, 52(1): 42-55. |
[2] | ZHENG Mingqi, CHEN Xiaohui, LIU Bing, ZHANG Bing, ZHANG Ran. Survey of Chain-of-Thought Generation and Enhancement Methods in Prompt Learning [J]. Computer Science, 2025, 52(1): 56-64. |
[3] | LI Jiahui, ZHANG Mengmeng, CHEN Honghui. Large Language Models Driven Framework for Multi-agent Military Requirement Generation [J]. Computer Science, 2025, 52(1): 65-71. |
[4] | LI Tingting, WANG Qi, WANG Jiakang, XU Yongjun. SWARM-LLM:An Unmanned Swarm Task Planning System Based on Large Language Models [J]. Computer Science, 2025, 52(1): 72-79. |
[5] | YAN Yusong, ZHOU Yuan, WANG Cong, KONG Shengqi, WANG Quan, LI Minne, WANG Zhiyuan. COA Generation Based on Pre-trained Large Language Models [J]. Computer Science, 2025, 52(1): 80-86. |
[6] | CHENG Zhiyu, CHEN Xinglin, WANG Jing, ZHOU Zhongyuan, ZHANG Zhizheng. Retrieval-augmented Generative Intelligence Question Answering Technology Based on Knowledge Graph [J]. Computer Science, 2025, 52(1): 87-93. |
[7] | LIU Changcheng, SANG Lei, LI Wei, ZHANG Yiwen. Large Language Model Driven Multi-relational Knowledge Graph Completion Method [J]. Computer Science, 2025, 52(1): 94-101. |
[8] | CHENG Jinfeng, JIANG Zongli. Dialogue Generation Model Integrating Emotional and Commonsense Knowledge [J]. Computer Science, 2025, 52(1): 307-314. |
[9] | ZHAO Qian, GUO Bin, LIU Yubo, SUN Zhuo, WANG Hao, CHEN Mengqi. Generation of Enrich Semantic Video Dialogue Based on Hierarchical Visual Attention [J]. Computer Science, 2025, 52(1): 315-322. |
[10] | NIU Guanglin, LIN Zhen. Survey of Knowledge Graph Representation Learning for Relation Feature Modeling [J]. Computer Science, 2024, 51(9): 182-195. |
[11] | HUANG Xiaofei, GUO Weibin. Multi-modal Fusion Method Based on Dual Encoders [J]. Computer Science, 2024, 51(9): 207-213. |
[12] | DAI Chaofan, DING Huahua. Domain-adaptive Entity Resolution Algorithm Based on Semi-supervised Learning [J]. Computer Science, 2024, 51(9): 214-222. |
[13] | MO Shuyuan, MENG Zuqiang. Multimodal Sentiment Analysis Model Based on Visual Semantics and Prompt Learning [J]. Computer Science, 2024, 51(9): 250-257. |
[14] | ZHANG Junsan, CHENG Ming, SHEN Xiuxuan, LIU Yuxue, WANG Leiquan. Diversified Label Matrix Based Medical Image Report Generation [J]. Computer Science, 2024, 51(8): 200-208. |
[15] | CHEN Shanshan, YAO Subin. Study on Recommendation Algorithms Based on Knowledge Graph and Neighbor PerceptionAttention Mechanism [J]. Computer Science, 2024, 51(8): 313-323. |
|