Computer Science ›› 2025, Vol. 52 ›› Issue (6A): 240400121-6. DOI: 10.11896/jsjkx.240400121

• Large Language Model Technology and Its Application •

Application of Large Language Models in Medical Education: Current Situation, Challenges and Future

TU Ji1, XIAO Wendong2, TU Wenji3, LI Lijian4   

  1 Institute of Basic Medical Sciences, Chinese Academy of Medical Sciences & School of Basic Medicine, Peking Union Medical College, Beijing 100005, China
    2 University of Science and Technology Beijing, Beijing 100083, China
    3 Dean’s Office, Peking Union Medical College, Beijing 100005, China
    4 Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
  • Online: 2025-06-16  Published: 2025-06-12
  • About author: TU Ji, born in 1986, senior engineer, is a senior member of CCF (31285M). His main research interests include the intersection of medicine and engineering, and epidemiology and health statistics.
    TU Wenji, born in 1984, associate researcher. Her main research interests include medical education management and medical education evaluation.
  • Supported by:
    Beijing Municipal Commission of Education 2024 Academic Status Management Research Project (XJXL202416), Medical Artificial Intelligence Educational Reform Project of the Institute of Basic Medical Sciences (2022jcjg0104), Ministry of Education 2022 Baidu Industry-Academia Cooperation Collaborative Education Project: Medical Artificial Intelligence Curriculum Construction (182215PC08768), and China Computer Federation (CCF)-Baidu Pinecone Fund: Construction and Application Evaluation of a Multimodal Collaborative Healthcare Large Language Model Benchmark Dataset (CCF-BAIDUOF202418).

Abstract: The digitization of medical education is an inevitable trend in its development. Introducing large language models into medical education can break the limitations of traditional teaching, improve students' learning interest and engagement, and support personalized instruction; it can also strengthen individualized clinical practice teaching and scientific research training, thereby improving teaching efficiency and effectiveness. This paper reviews the development of large language model technology and the technical progress of medical large language models, surveys the application scenarios of large language models in medical education, and identifies seven challenges they face. It argues that the future of medical-education large language models lies in building autonomous and controllable collaborative models along a technical route driven jointly by knowledge and data.
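To make the knowledge-and-data-driven route concrete, the following Python sketch illustrates one plausible pattern: answers for medical students are first grounded in a curated knowledge base, and only then generated by a general-purpose model (retrieval-augmented generation). This is a minimal illustration under stated assumptions; the MedicalKnowledgeBase class, the call_llm placeholder and the sample entries are hypothetical and are not taken from the paper.

# Hypothetical sketch of a knowledge-and-data-driven medical-education assistant:
# curated knowledge entries (knowledge-driven) are retrieved and prepended to the
# prompt before it is sent to a general-purpose LLM (data-driven).

from dataclasses import dataclass
from typing import List


@dataclass
class KnowledgeEntry:
    topic: str
    content: str


class MedicalKnowledgeBase:
    """Toy in-memory knowledge base; a real system would use a vetted corpus and vector search."""

    def __init__(self, entries: List[KnowledgeEntry]):
        self.entries = entries

    def retrieve(self, question: str, top_k: int = 2) -> List[KnowledgeEntry]:
        # Rank entries by naive word overlap with the question (placeholder for semantic retrieval).
        words = set(question.lower().split())
        scored = [
            (len(words & set((e.topic + " " + e.content).lower().split())), e)
            for e in self.entries
        ]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [e for score, e in scored[:top_k] if score > 0]


def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call (e.g. to an institution-hosted, controllable model)."""
    return f"[LLM response to a prompt of {len(prompt)} characters]"


def answer_student_question(question: str, kb: MedicalKnowledgeBase) -> str:
    # Retrieve vetted teaching material and constrain the model to it.
    evidence = kb.retrieve(question)
    context = "\n".join(f"- {e.topic}: {e.content}" for e in evidence)
    prompt = (
        "You are a tutor for medical students. Answer using ONLY the reference notes below, "
        "and say so explicitly if they are insufficient.\n"
        f"Reference notes:\n{context}\n"
        f"Question: {question}"
    )
    return call_llm(prompt)


if __name__ == "__main__":
    kb = MedicalKnowledgeBase([
        KnowledgeEntry("insulin", "Insulin lowers blood glucose by promoting cellular uptake."),
        KnowledgeEntry("glucagon", "Glucagon raises blood glucose by stimulating hepatic glycogenolysis."),
    ])
    print(answer_student_question("How does insulin regulate blood glucose?", kb))

Grounding prompts in vetted course material in this way is one route toward the autonomous and controllable behavior the abstract calls for, since each answer can be traced back to approved teaching content.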

Key words: Large language model, Medical education, Artificial intelligence, Digitization of education, ERNIE Bot

CLC Number: TP18