Computer Science, 2025, Vol. 52, Issue (6A): 240400097-7. DOI: 10.11896/jsjkx.240400097

• Large Language Model Technology and Its Application •

Application of Large Language Models in Recommendation System

LI Bo, MO Xian   

  1. College of Information Engineering, Ningxia University, Yinchuan 750021, China
  • Online: 2025-06-16  Published: 2025-06-12
  • About author: LI Bo, born in 2003, undergraduate, is a member of CCF (No.T9206G). His main research interests include recommendation algorithms.
    MO Xian, born in 1990, Ph.D., associate professor, master's supervisor, is a member of CCF (No.R6178M). His main research interests include data mining, graph neural networks, and recommender systems.
  • Supported by:
    National Natural Science Foundation of China (62306157) and the Science and Technology Leading Talent Training Projects of Ningxia (2022GKLRLX03).

Abstract: Large language models (LLMs) play a key role in recommendation systems (RS), spanning feature engineering and feature encoding, pre-training and fine-tuning, and prompt learning. Through feature engineering and feature encoding, LLMs improve the personalization and accuracy of recommendations and strengthen the generalization ability and adaptability of the underlying models. Studies show that, in the feature engineering stage, LLMs can enrich user profiles and extract item features. The pre-training and fine-tuning stages train the model on large amounts of unlabeled data to prepare it for downstream deployment. The prompt learning stage improves the model's ability to understand and solve recommendation tasks through carefully designed instructions and prompts. This paper also discusses the challenges LLMs face in recommendation applications, such as high computational cost, API dependency, and data noise, together with the optimization strategies researchers are exploring to address them. Future development of recommendation systems is likely to focus on data augmentation, fine-tuning efficiency, prompt design, and interpretability. This comprehensive analysis provides a solid theoretical foundation for continued development and innovation in the field of recommendation systems.
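To make the prompt learning stage concrete, the sketch below shows one common pattern surveyed in this line of work: a user's interaction history and a candidate pool are serialized into a natural-language instruction, an LLM ranks the candidates, and the free-text reply is parsed back into item recommendations. This is a minimal illustrative sketch, not the method of any specific cited paper; the `llm` callable, the prompt template, and all function names are assumptions standing in for whatever chat-completion client and template a real system would use.

```python
# Minimal sketch of prompt-based recommendation (hypothetical names throughout).
# `llm` is a stand-in for any text-in/text-out chat-completion client.
from typing import Callable, List


def build_prompt(history: List[str], candidates: List[str], k: int = 3) -> str:
    """Serialize a user's interaction history and a candidate pool into an instruction."""
    hist = "; ".join(history)
    cand = "\n".join(f"{i}. {c}" for i, c in enumerate(candidates, 1))
    return (
        f"A user interacted with these items: {hist}.\n"
        f"Candidate items:\n{cand}\n"
        f"Rank the {k} candidates the user is most likely to enjoy next. "
        f"Answer with candidate numbers only, comma-separated."
    )


def recommend(llm: Callable[[str], str], history: List[str],
              candidates: List[str], k: int = 3) -> List[str]:
    """Query the LLM and map its numeric answer back to item names."""
    reply = llm(build_prompt(history, candidates, k))
    picks = []
    for token in reply.replace(",", " ").split():
        if token.strip(".").isdigit():
            idx = int(token.strip(".")) - 1
            if 0 <= idx < len(candidates):
                picks.append(candidates[idx])
    return picks[:k]


if __name__ == "__main__":
    # Fake LLM for a self-contained demo; a real system would call an API here.
    fake_llm = lambda prompt: "2, 1, 3"
    print(recommend(fake_llm, ["The Matrix", "Blade Runner"],
                    ["Titanic", "Ghost in the Shell", "Inception", "Frozen"]))
```

Because the LLM's output is unconstrained text, the parsing step is where data noise enters in practice, which is one reason the abstract lists output robustness and prompt design among the open optimization problems.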

Key words: Recommendation system, Large language models, Feature engineering, Pre-training and fine-tuning, Prompt learning

CLC Number: TP183