Computer Science ›› 2025, Vol. 52 ›› Issue (11A): 250100011-4.doi: 10.11896/jsjkx.250100011

• Artificial Intelligence •

Research on Retrieval-augmented Generation Technology Combining Graph Retrieval and Contextual Ranking

XUE Xiaonan   

  1. Beijing Youth Political College, Beijing 100102, China
  • Online: 2025-11-15  Published: 2025-11-10

Abstract: Complex question-answering tasks require models to retrieve relevant information efficiently from large-scale heterogeneous knowledge sources while supporting the generation of high-quality answers. However, existing retrieval-augmented generation methods face numerous challenges in knowledge retrieval, semantic relevance, and generation consistency: (1) the knowledge retrieval module lacks sufficient granularity and structured information; (2) retrieval lacks contextual relevance, ranking capability is limited, and generation quality is constrained; (3) generative models struggle to integrate retrieved knowledge accurately and to produce contextually consistent answers. This paper proposes a novel framework, GraphRank-RAG, which combines graph-based retrieval-augmented generation with contextual ranking to address these issues. By introducing a graph-based retrieval mechanism, the framework captures deep semantic relationships within contexts, optimizing both the ranking process and answer generation. Experimental results demonstrate that the proposed method outperforms existing approaches on multiple open-domain question-answering datasets, achieving significant improvements in retrieval accuracy and generation quality.
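The pipeline described in the abstract, graph-based retrieval followed by contextual ranking of candidates before generation, can be sketched roughly as follows. This is a minimal illustrative toy, not the paper's implementation: the passages, edge list, keyword seeding, and overlap-based ranking score are all assumptions standing in for GraphRank-RAG's actual components.

```python
# Toy sketch of a graph-retrieval + contextual-ranking pipeline.
# Each passage is a graph node; edges link passages that share an entity.
# All data and scoring below are illustrative assumptions.
PASSAGES = {
    "p1": "Paris is the capital of France.",
    "p2": "France is a country in Western Europe.",
    "p3": "The Eiffel Tower is a landmark in Paris.",
    "p4": "Mount Fuji is the highest mountain in Japan.",
}
EDGES = {"p1": ["p2", "p3"], "p2": ["p1"], "p3": ["p1"], "p4": []}

def graph_retrieve(query, hops=1):
    """Seed with keyword-matching passages, then expand along graph edges."""
    terms = set(query.lower().split())
    seeds = {pid for pid, text in PASSAGES.items()
             if terms & set(text.lower().strip(".").split())}
    frontier, seen = set(seeds), set(seeds)
    for _ in range(hops):  # neighbourhood expansion captures related context
        frontier = {n for pid in frontier for n in EDGES[pid]} - seen
        seen |= frontier
    return seen

def contextual_rank(query, candidates):
    """Order candidates by lexical overlap with the query context."""
    terms = set(query.lower().split())
    def score(pid):
        words = set(PASSAGES[pid].lower().strip(".").split())
        return len(terms & words) / len(words)
    return sorted(candidates, key=score, reverse=True)

query = "capital of France"
ranked = contextual_rank(query, graph_retrieve(query))
print(ranked[0])  # prints: p1
```

In a full system the keyword seeding would be replaced by dense entity linking, the overlap score by a learned reranker, and the top-ranked passages would be passed to the generator as grounding context.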

Key words: Large language model, Retrieval-augmented generation, Graph retrieval, Contextual ranking, Retrieval technology

CLC Number: TP181
[1] LEWIS P, PEREZ E, PIKTUS A, et al. Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks[J]. arXiv:2005.11401, 2020.
[2] TONMOY S M T I, ZAMAN S M M, JAIN V, et al. A Comprehensive Survey of Hallucination Mitigation Techniques in Large Language Models[J]. arXiv:2401.01313, 2024.
[3] KASAI J, SAKAGUCHI K, YOICHI T, et al. RealTime QA: What's the answer right now?[C]//NeurIPS. 2023.
[4] XU P, PING W, WU X C, et al. Retrieval meets long context large language models[C]//Proceedings of the International Conference on Learning Representations(ICLR). 2024.
[5] ROBERTSON S, ZARAGOZA H. The probabilistic relevance framework: BM25 and beyond[J]. Foundations and Trends® in Information Retrieval, 2009, 3(4): 333-389.
[6] KARPUKHIN V, OĞUZ B, MIN S, et al. Dense passage retrieval for open-domain question answering[J]. arXiv:2004.04906, 2020.
[7] XU F, SHI W, CHOI E. RECOMP: Improving retrieval-augmented LMs with compression and selective augmentation[J]. arXiv:2310.04408, 2023.
[8] MA X, ZHANG X, PRADEEP R, et al. Zero-shot listwise document reranking with a large language model[J]. arXiv:2305.02156, 2023.
[9] YU Y, PING W, LIU Z, et al. RankRAG: Unifying context ranking with retrieval-augmented generation in LLMs[J]. arXiv:2407.02485, 2024.
[10] EDGE D, TRINH H, CHENG N, et al. From local to global: A graph RAG approach to query-focused summarization[J]. arXiv:2404.16130, 2024.