Computer Science ›› 2025, Vol. 52 ›› Issue (11A): 250100011-4. doi: 10.11896/jsjkx.250100011

• Artificial Intelligence •

Research on Retrieval-augmented Generation Technology Combining Graph Retrieval and Contextual Ranking

XUE Xiaonan   

  1. Beijing Youth Political College, Beijing 100102, China
  • Online: 2025-11-15  Published: 2025-11-10
  • About author: XUE Xiaonan, born in 1989. His main research interests include AI and cybersecurity.

Abstract: Complex question-answering tasks require models to retrieve relevant information efficiently from large-scale heterogeneous knowledge sources while supporting the generation of high-quality answers. However, existing retrieval-augmented generation methods face numerous challenges in knowledge retrieval, semantic relevance, and generation consistency: (1) the knowledge retrieval module makes insufficient use of fine-grained and structured information; (2) retrieval lacks contextual relevance, ranking capability is limited, and generation quality is constrained; (3) generative models struggle to integrate retrieved knowledge accurately and to produce contextually consistent answers. This paper proposes a novel framework, GraphRank-RAG, which combines graph-based retrieval-augmented generation with contextual ranking to address these issues. By introducing a graph-based retrieval mechanism, the framework captures deep semantic relationships within contexts, optimizing both the ranking process and answer generation. Experimental results demonstrate that the proposed method outperforms existing approaches on multiple open-domain question-answering datasets, achieving significant improvements in retrieval accuracy and generation quality.
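The two retrieval stages the abstract describes (graph-based retrieval followed by contextual ranking, with the ranked contexts passed to a generator) can be sketched minimally as follows. This is an illustrative stand-in, not the paper's implementation: the toy knowledge graph, the passage store, and the token-overlap scoring function (a placeholder for an LLM-based reranker) are all assumptions introduced for the example.

```python
# Toy knowledge graph: nodes are entities, edges link related entities,
# and each node carries a short text passage (stand-ins for a real KG).
GRAPH_EDGES = {
    "Paris": ["France", "Eiffel Tower"],
    "France": ["Paris", "Europe"],
    "Eiffel Tower": ["Paris"],
    "Europe": ["France"],
}
PASSAGES = {
    "Paris": "Paris is the capital of France.",
    "France": "France is a country in Europe.",
    "Eiffel Tower": "The Eiffel Tower is a landmark in Paris.",
    "Europe": "Europe is a continent.",
}

def graph_retrieve(query, hops=1):
    """Stage 1: seed on entities mentioned in the query, then expand
    along graph edges to capture structurally related knowledge."""
    seeds = [e for e in GRAPH_EDGES if e.lower() in query.lower()]
    frontier, seen = list(seeds), set(seeds)
    for _ in range(hops):
        nxt = []
        for node in frontier:
            for neighbor in GRAPH_EDGES.get(node, []):
                if neighbor not in seen:
                    seen.add(neighbor)
                    nxt.append(neighbor)
        frontier = nxt
    return [PASSAGES[n] for n in seen]

def contextual_rank(query, passages):
    """Stage 2: order passages by token overlap with the query
    (a crude placeholder for a learned contextual reranker)."""
    q_tokens = set(query.lower().split())
    def score(p):
        return len(q_tokens & set(p.lower().rstrip(".").split()))
    return sorted(passages, key=score, reverse=True)

query = "What country is Paris the capital of?"
contexts = contextual_rank(query, graph_retrieve(query))
# The top-ranked contexts would be prepended to the LLM prompt
# for the final answer-generation stage.
```

In a full system, stage 1 would traverse an automatically constructed entity graph and stage 2 would use a cross-encoder or LLM reranker; the skeleton above only shows how the two stages compose.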

Key words: Large language model, Retrieval-augmented generation, Graph retrieval, Contextual ranking, Retrieval technology

CLC Number: TP181
[1] LEWIS P, PEREZ E, PIKTUS A, et al. Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks[J]. arXiv:2005.11401, 2020.
[2] TONMOY S M T I, ZAMAN S M M, JAIN V, et al. A Comprehensive Survey of Hallucination Mitigation Techniques in Large Language Models[J]. arXiv:2401.01313, 2024.
[3] KASAI J, SAKAGUCHI K, YOICHI T, et al. RealTime QA: What's the Answer Right Now?[C]//NeurIPS. 2023.
[4] XU P, PING W, WU X C, et al. Retrieval Meets Long Context Large Language Models[C]//Proceedings of the International Conference on Learning Representations (ICLR). 2024.
[5] ROBERTSON S, ZARAGOZA H. The Probabilistic Relevance Framework: BM25 and Beyond[J]. Foundations and Trends in Information Retrieval, 2009, 3(4): 333-389.
[6] KARPUKHIN V, OĞUZ B, MIN S, et al. Dense Passage Retrieval for Open-Domain Question Answering[J]. arXiv:2004.04906, 2020.
[7] XU F, SHI W, CHOI E. RECOMP: Improving Retrieval-Augmented LMs with Compression and Selective Augmentation[J]. arXiv:2310.04408, 2023.
[8] MA X, ZHANG X, PRADEEP R, et al. Zero-Shot Listwise Document Reranking with a Large Language Model[J]. arXiv:2305.02156, 2023.
[9] YU Y, PING W, LIU Z, et al. RankRAG: Unifying Context Ranking with Retrieval-Augmented Generation in LLMs[J]. arXiv:2407.02485, 2024.
[10] EDGE D, TRINH H, CHENG N, et al. From Local to Global: A Graph RAG Approach to Query-Focused Summarization[J]. arXiv:2404.16130, 2024.