Computer Science ›› 2025, Vol. 52 ›› Issue (1): 87-93. doi: 10.11896/jsjkx.240900064
CHENG Zhiyu1, CHEN Xinglin2, WANG Jing3, ZHOU Zhongyuan4, ZHANG Zhizheng5,6
Abstract: To support military intelligence question answering, a knowledge-graph-based retrieval-augmented generation framework is proposed. The framework acquires background knowledge through question classification, entity recognition, entity linking, and knowledge retrieval. Considering that intelligence questions typically carry multiple constraints, answer set programming is then used to apply those constraints over the retrieved knowledge, either pruning the candidate facts or deriving the answer directly. Finally, a large language model solves the question over the refined knowledge, reducing the attribute recognition and linking needed during question understanding. Experiments on the MilRE dataset show that the proposed framework provides knowledge-graph-based enhanced knowledge retrieval and answers military intelligence questions effectively.
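The retrieve-then-constrain pipeline described above can be sketched in miniature. This is an illustrative toy, not the paper's implementation: the knowledge graph, entity names, and constraint format are all invented here, and plain Python predicates stand in for the answer-set-programming step that prunes candidates under multiple constraints before the refined facts are handed to an LLM.

```python
# Hypothetical sketch of the pipeline: recognize entities, retrieve triples
# from a knowledge graph, then prune candidates with multiple constraints
# (standing in for the answer set programming step). All names are invented.

# Toy knowledge graph: (subject, relation, object) triples.
KG = [
    ("ShipA", "type", "destroyer"),
    ("ShipA", "speed_knots", 30),
    ("ShipB", "type", "destroyer"),
    ("ShipB", "speed_knots", 18),
    ("ShipC", "type", "frigate"),
    ("ShipC", "speed_knots", 28),
]

def recognize_entities(question, kg):
    """Naive entity recognition: match KG subjects mentioned in the question."""
    subjects = {s for s, _, _ in kg}
    return sorted(s for s in subjects if s in question)

def retrieve(kg, constraints):
    """Keep only subjects whose triples satisfy every (relation, predicate) pair.

    With a selective enough constraint set this yields the answer directly;
    otherwise it shrinks the knowledge passed on to the language model.
    """
    facts = {}
    for s, r, o in kg:
        facts.setdefault(s, {})[r] = o
    return sorted(
        s for s, attrs in facts.items()
        if all(r in attrs and pred(attrs[r]) for r, pred in constraints)
    )

# Multi-constraint question: "Which destroyers are faster than 25 knots?"
answer = retrieve(KG, [
    ("type", lambda v: v == "destroyer"),
    ("speed_knots", lambda v: isinstance(v, int) and v > 25),
])
print(answer)  # ['ShipA']
```

In the full framework the surviving facts would be serialized into the LLM prompt, so the model reasons over a small, already-constrained context rather than the whole graph.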